Have a Plan to support “coap”? #4
Comments
Not right now. I'm going to add QUIC (UDP) support in nuster after importing HTTP/2, which was added in HAProxy 1.8 recently. Might take a look after that.
jiangwenyuan pushed a commit that referenced this issue on Mar 10, 2019
Add LEVEL #4 regression testing files, which are dedicated to VTC files in relation with the bugs they help to reproduce. At the date of this commit, all VTC files are LEVEL 4 VTC files.
jiangwenyuan pushed a commit that referenced this issue on Nov 2, 2019
BUG/MINOR: WURFL: fix send_log() function arguments

If the user agent data contains text with special characters that are used to format the output of the vfprintf() function, haproxy crashes. The string "%s %s %s" may be used as an example.

% curl -A "%s %s %s" localhost:10080/index.html
curl: (52) Empty reply from server

haproxy log:
00000000:WURFL-test.clireq[00c7:ffffffff]: GET /index.html HTTP/1.1
00000000:WURFL-test.clihdr[00c7:ffffffff]: host: localhost:10080
00000000:WURFL-test.clihdr[00c7:ffffffff]: user-agent: %s %s %s
00000000:WURFL-test.clihdr[00c7:ffffffff]: accept: */*
segmentation fault (core dumped)

gdb 'where' output:
#0  strlen () at ../sysdeps/x86_64/strlen.S:106
#1  0x00007f7c014a8da8 in _IO_vfprintf_internal (s=s@entry=0x7ffc808fe750, format=<optimized out>, format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n", ap=ap@entry=0x7ffc808fe8b8) at vfprintf.c:1637
#2  0x00007f7c014cfe89 in _IO_vsnprintf (string=0x55cb772c34e0 "WURFL: retrieve header request returns [(null) %s %s %s B,w\313U", maxlen=<optimized out>, format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n", args=args@entry=0x7ffc808fe8b8) at vsnprintf.c:114
#3  0x000055cb758f898f in send_log (p=p@entry=0x0, level=level@entry=5, format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n") at src/log.c:1477
#4  0x000055cb75845e0b in ha_wurfl_log (message=message@entry=0x55cb75989460 "WURFL: retrieve header request returns [%s]\n") at src/wurfl.c:47
#5  0x000055cb7584614a in ha_wurfl_retrieve_header (header_name=<optimized out>, wh=0x7ffc808fec70) at src/wurfl.c:763

If WURFL (actually HAProxy) is not compiled with the debug option enabled (-DWURFL_DEBUG), this bug does not come to light.

This patch could be backported to every version supporting ScientiaMobile's WURFL (as far as 1.7).

(cherry picked from commit f0eb3739ac5460016455cd606d856e7bd2b142fb)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 5e1c146)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
(cherry picked from commit 9a5cba4)
Signed-off-by: Willy Tarreau <w@1wt.eu>
(cherry picked from commit 866fbea)
Signed-off-by: Willy Tarreau <w@1wt.eu>
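The crash above is a classic format-string bug: an attacker-controlled header value ends up used as the format argument of a printf-style function, so every stray %s makes the C library chase a garbage pointer. The following is a minimal, self-contained C sketch of the problem and of the usual fix; demo_log() is an invented stand-in for a logging helper in the spirit of send_log(), not HAProxy's actual code.

#include <stdarg.h>
#include <stdio.h>

/* Invented stand-in for a printf-style logging helper such as send_log():
 * whatever arrives in 'format' goes straight to vsnprintf(). */
static void demo_log(const char *format, ...)
{
    char buf[256];
    va_list args;

    va_start(args, format);
    vsnprintf(buf, sizeof(buf), format, args); /* expands %s, %d, ... */
    va_end(args);
    fputs(buf, stderr);
}

int main(void)
{
    const char *user_agent = "%s %s %s"; /* attacker-controlled header value */

    /* Unsafe: the untrusted string becomes the format string, so each %s
     * makes vsnprintf() read a bogus pointer and strlen() it, which is the
     * crash shown in the backtrace above. */
    /* demo_log(user_agent); */

    /* Safe: keep the format string fixed and pass user data as an argument. */
    demo_log("user-agent: %s\n", user_agent);
    return 0;
}

Passing untrusted data only as arguments, never as the format string itself, is the whole fix; compilers can also flag the unsafe call with -Wformat-security when the helper is annotated as printf-like.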
jiangwenyuan pushed a commit that referenced this issue on Nov 2, 2019
Another backport of "BUG/MINOR: WURFL: fix send_log() function arguments"; the commit message is identical to the one above.
jiangwenyuan pushed a commit that referenced this issue on Nov 2, 2019
Another backport of "BUG/MINOR: WURFL: fix send_log() function arguments"; the commit message is identical to the one above.
jiangwenyuan pushed a commit that referenced this issue on Dec 9, 2019
Another backport of "BUG/MINOR: WURFL: fix send_log() function arguments"; the commit message is identical to the one above.
jiangwenyuan pushed a commit that referenced this issue on Feb 13, 2020
Squashed commit of the following:

commit 84aad5b569f5bc99f915f12531993236de72b714
Author: Willy Tarreau <w@1wt.eu>
Date:   Fri Oct 25 09:59:17 2019 +0200

[RELEASE] Released version 1.7.12

Released version 1.7.12 with the following main changes:
- BUG/MINOR: checks: Fix check->health computation for flapping servers
- BUG/MINOR: map: correctly track reference to the last ref_elt being dumped
- BUG/MINOR: lua: ensure large proxy IDs can be represented
- BUG/MINOR: spoe: Mistake in error message about SPOE configuration
- BUG/MINOR: lua: Socket.send threw runtime error: 'close' needs 1 arguments.
- BUG/MINOR: ssl/lua: prevent lua from affecting automatic maxconn computation
- BUG/MEDIUM: lua/socket: Length required read doesn't work
- MINOR: task/notification: Is notifications registered ?
- BUG/MEDIUM: lua/socket: wrong scheduling for sockets
- BUG/MAJOR: lua: Dead lock with sockets
- BUG/MEDIUM: lua/socket: Notification error
- BUG/MEDIUM: lua/socket: Sheduling error on write: may dead-lock
- BUG/MEDIUM: lua/socket: Buffer error, may segfault
- BUG/MINOR: lua: Segfaults with wrong usage of types.
- BUG/MEDIUM: stream-int: don't immediately enable reading when the buffer was reportedly full
- BUG/MEDIUM: stats: don't ask for more data as long as we're responding
- BUG/MINOR: servers: Don't make "server" in a frontend fatal.
- BUG/MINOR: config: stick-table is not supported in defaults section
- SCRIPTS: git-show-backports: add missing quotes to "echo"
- BUG/MAJOR: map: fix a segfault when using http-request set-map
- BUILD: Generate sha256 checksums in publish-release
- BUG/MEDIUM: lua: possible CLOSE-WAIT state with '\n' headers
- BUG/MEDIUM: ssl: fix missing error loading a keytype cert from a bundle.
- BUG/MEDIUM: ssl: loading dh param from certifile causes unpredictable error.
- BUG/MINOR: lua: Bad HTTP client request duration.
- BUG/MEDIUM: lua: socket timeouts are not applied
- BUG/MEDIUM: queue: prevent a backup server from draining the proxy's connections
- BUG/MINOR: map: fix map_regm with backref
- DOC: Fix spelling error in configuration doc
- BUG/MEDIUM: lua: reset lua transaction between http requests
- BUG/MEDIUM: hlua: Make sure we drain the output buffer when done.
- BUG/MINOR: tools: fix set_net_port() / set_host_port() on IPv4
- DOC: clarify force-private-cache is an option
- BUG/MEDIUM: buffers: Make sure we don't wrap in buffer_insert_line2/replace2.
- MINOR: server: Use memcpy() instead of strncpy().
- MINOR: cfgparse: Write 130 as 128 as 0x82 and 0x80.
- MINOR: peers: use defines instead of enums to appease clang.
- DOC: fix reference to map files in MAINTAINERS
- BUG/MINOR: config: Copy default error messages when parsing of a backend starts
- BUG/MEDIUM: sample: Don't treat SMP_T_METH as SMP_T_STR.
- MINOR: stats: report the number of active jobs and listeners in "show info"
- BUG: dns: Prevent stack-exhaustion via recursion loop in dns_read_name
- BUG: dns: Prevent out-of-bounds read in dns_read_name()
- BUG: dns: Prevent out-of-bounds read in dns_validate_dns_response()
- BUG/MEDIUM: dns: Don't prevent reading the last byte of the payload in dns_validate_response()
- BUG: dns: Fix out-of-bounds read via signedness error in dns_validate_dns_response()
- DOC: Update configuration doc about the maximum number of stick counters.
- DOC: restore note about "independant" typo
- BUG/MEDIUM: dns: overflowed dns name start position causing invalid dns error
- BUG/MAJOR: stream-int: Update the stream expiration date in stream_int_notify()
- BUG/MEDIUM: ssl: missing allocation failure checks loading tls key file
- BUG/MINOR: backend: don't use url_param_name as a hint for BE_LB_ALGO_PH
- BUG/MINOR: backend: balance uri specific options were lost across defaults
- BUG/MINOR: backend: BE_LB_LKUP_CHTREE is a value, not a bit
- BUG/MINOR: stick_table: Prevent conn_cur from underflowing
- BUG/MINOR: server: don't always trust srv_check_health when loading a server state
- BUG/MINOR: check: Wake the check task if the check is finished in wake_srv_chk()
- DOC: mention the effect of nf_conntrack_tcp_loose on src/dst
- BUG/MINOR: deinit: tcp_rep.inspect_rules not deinit, add to deinit
- SCRIPTS: add the slack channel URL to the announce script
- SCRIPTS: add the issue tracker URL to the announce script
- BUG/MINOR: stream: don't close the front connection when facing a backend error
- BUG/MEDIUM: stream: Don't forget to free s->unique_id in stream_free().
- BUG/MAJOR: config: verify that targets of track-sc and stick rules are present
- BUG/MAJOR: spoe: verify that backends used by SPOE cover all their callers' processes
- BUG/MINOR: config: make sure to count the error on incorrect track-sc/stick rules
- BUG/MAJOR: stream: avoid double free on unique_id
- BUG/MEDIUM: 51d: fix possible segfault on deinit_51degrees()
- BUG/MAJOR: stats: Fix how huge POST data are read from the channel
- BUG/MINOR: tcp: Don't alter counters returned by tcp info fetchers
- BUG/MINOR: http/counters: fix missing increment of fe->srv_aborts
- BUG/MINOR: http: Call stream_inc_be_http_req_ctr() only one time per request
- BUG/MINOR: http-rules: mention "deny_status" for "deny" in the error message
- BUG/MEDIUM: proto-http: Always start the parsing if there is no outgoing data
- BUG/MEDIUM: http: also reject messages where "chunked" is missing from transfer-enoding
- BUG/MAJOR: checks: segfault during tcpcheck_main
- BUG/MEDIUM: tcp-check: unbreak multiple connect rules again
- BUILD: makefile: work around an old bug in GNU make-3.80
- BUILD: makefile: use :space: instead of digits to count commits
- BUILD: makefile: do not rely on shell substitutions to determine git version
- BUG/MEDIUM: peers: fix a case where peer session is not cleanly reset on release.
- BUG/MEDIUM: pattern: assign pattern IDs after checking the config validity
- BUG/MEDIUM: maps: only try to parse the default value when it's present
- BUG/MINOR: acl: properly detect pattern type SMP_T_ADDR
- BUG/MAJOR: map/acl: real fix segfault during show map/acl on CLI
- BUG/MINOR: acl: Fix memory leaks when an ACL expression is parsed
- MINOR: config: Test validity of tune.maxaccept during the config parsing
- CLEANUP: config: Don't alter listener->maxaccept when nbproc is set to 1
- BUG/MEDIUM: vars: make sure the scope is always valid when accessing vars
- BUG/MEDIUM: vars: make the tcp/http unset-var() action support conditions
- BUG/MEDIUM: compression: Set Vary: Accept-Encoding for compressed responses
- DOC: improve the wording in CONTRIBUTING about how to document a bug fix
- BUG/MEDIUM: hlua: Check the calling direction in lua functions of the HTTP class
- MINOR: hlua: Don't set request analyzers on response channel for lua actions
- MINOR: hlua: Add a flag on the lua txn to know in which context it can be used
- BUG/MINOR: hlua: Only execute functions of HTTP class if the txn is HTTP ready
- BUG/MINOR: lua: Set right direction and flags on new HTTP objects
- BUG/MINOR: lua: Properly initialize the buffer's fields for string samples in hlua_lua2(smp|arg)
- BUG/MEDIUM: lb-chash: Fix the realloc() when the number of nodes is increased
- BUG/MEDIUM: lb-chash: Ensure the tree integrity when server weight is increased
- BUG/MINOR: stream-int: also update analysers timeouts on activity
- BUG/MINOR: haproxy: fix rule->file memory leak
- BUG/MEDIUM: namespace: close open namespaces during soft shutdown
- BUG/MINOR: ssl: free the sni_keytype nodes
- BUG/MINOR: ssl: abort on sni allocation failure
- BUG/MINOR: ssl: abort on sni_keytypes allocation failure
- BUG/MINOR: ssl: Fix fd leak on error path when a TLS ticket keys file is parsed
- BUG/MINOR: stick-table: Never exceed (MAX_SESS_STKCTR-1) when fetching a stkctr
- BUG/MINOR: sample: Make the `field` converter compatible with `-m found`
- BUILD/MINOR: ssl: silence a build warning about const and 'cipher'
- BUG/MEDIUM: da: cast the chunk to string.
- BUG/MINOR: WURFL: fix send_log() function arguments
- DOC: Fix documentation about the cli command to get resolver stats
- BUG/MINOR: stick-table: fix an incorrect 32 to 64 bit key conversion

commit 34b473fd9058430535f2af2084a0c60fcb2adac0
Author: Willy Tarreau <w@1wt.eu>
Date:   Wed Oct 23 06:21:05 2019 +0200

BUG/MINOR: stick-table: fix an incorrect 32 to 64 bit key conversion

As reported in issue #331, the code used to cast a 32-bit to a 64-bit stick-table key is wrong. It only copies the 32 lower bits in place on little endian machines or overwrites the 32 higher ones on big endian machines. It ought to simply remove the wrong cast dereference.

This bug was introduced when changing stick table keys to samples in 1.6-dev4 by commit bc8c404449 ("MAJOR: stick-tables: use sample types in place of dedicated types") so the fix must be backported as far as 1.6.
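To make the cast problem just described concrete, here is a small standalone C sketch of that class of conversion error; it only illustrates the pattern the commit message describes and is not the HAProxy code itself.

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t key32 = 0xDEADBEEFu;
    uint64_t key64;

    /* Buggy pattern: reinterpret the 32-bit object as a 64-bit one.
     * Reading 8 bytes from a 4-byte variable is undefined behaviour, and
     * even when it "works" the result depends on byte order: little-endian
     * keeps the low 32 bits plus whatever garbage follows in memory,
     * big-endian ends up with the value shifted into the high 32 bits. */
    /* key64 = *(uint64_t *)&key32; */

    /* Correct: a plain integer conversion zero-extends the value the same
     * way on every architecture, comparable to simply dropping the wrong
     * cast as the commit message suggests. */
    key64 = key32;

    printf("key64 = 0x%016" PRIx64 "\n", key64);
    return 0;
}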
(cherry picked from commit 28c63c15f572a1afeabfdada6a0a4f4d023d05fc) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 6fe22ed08a642d27f1a228c6f3b7f9f0dd0ea4cd) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 5c779f766524d5d6a754710a5a8a4d3d3013d2e6) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit ff19ac8ca311febb29269ec47db7879b7ad3d10e) [wt: adjusted context] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 3f49d2d1e31fb1842b8de4f9acd1f5da3cf60783 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Sep 27 10:45:47 2019 +0200 DOC: Fix documentation about the cli command to get resolver stats In the management guide, this command was still referenced as "show stat resolvers" instead of "show resolvers". The cli command was fixed by the commit ff88efbd7 ("BUG/MINOR: dns: Fix CLI keyword declaration"). This patch fixes the issue #296. It can be backported as fas as 1.7. (cherry picked from commit 78c430616552e024fc1e7a6650302702ae4544d1) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 7a5303a392ffa16de95f21b5d1ce4acb9a1778cf) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 61a74eaf84dad6d04986e517a71e172b53a15f80) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 085b1eb5e555898e0316a365e37c8b546c91fe1f) [wt: context] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 26f4f241d311f4573a9437cc0da98fa4a5f8948f Author: Miroslav Zagorac <mzagorac@haproxy.com> Date: Mon Oct 14 17:15:56 2019 +0200 BUG/MINOR: WURFL: fix send_log() function arguments If the user agent data contains text that has special characters that are used to format the output from the vfprintf() function, haproxy crashes. String "%s %s %s" may be used as an example. % curl -A "%s %s %s" localhost:10080/index.html curl: (52) Empty reply from server haproxy log: 00000000:WURFL-test.clireq[00c7:ffffffff]: GET /index.html HTTP/1.1 00000000:WURFL-test.clihdr[00c7:ffffffff]: host: localhost:10080 00000000:WURFL-test.clihdr[00c7:ffffffff]: user-agent: %s %s %s 00000000:WURFL-test.clihdr[00c7:ffffffff]: accept: */* segmentation fault (core dumped) gdb 'where' output: #0 strlen () at ../sysdeps/x86_64/strlen.S:106 #1 0x00007f7c014a8da8 in _IO_vfprintf_internal (s=s@entry=0x7ffc808fe750, format=<optimized out>, format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n", ap=ap@entry=0x7ffc808fe8b8) at vfprintf.c:1637 #2 0x00007f7c014cfe89 in _IO_vsnprintf ( string=0x55cb772c34e0 "WURFL: retrieve header request returns [(null) %s %s %s B,w\313U", maxlen=<optimized out>, format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n", args=args@entry=0x7ffc808fe8b8) at vsnprintf.c:114 #3 0x000055cb758f898f in send_log (p=p@entry=0x0, level=level@entry=5, format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n") at src/log.c:1477 #4 0x000055cb75845e0b in ha_wurfl_log ( message=message@entry=0x55cb75989460 "WURFL: retrieve header request returns [%s]\n") at src/wurfl.c:47 #5 0x000055cb7584614a in ha_wurfl_retrieve_header (header_name=<optimized out>, wh=0x7ffc808fec70) at src/wurfl.c:763 In case WURFL (actually HAProxy) is not compiled with debug option enabled (-DWURFL_DEBUG), this bug does not come to light. This patch could be backported in every version supporting the ScientiaMobile's WURFL. 
(as far as 1.7) (cherry picked from commit f0eb3739ac5460016455cd606d856e7bd2b142fb) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 5e1c1468789d80213325054b6fc1dbd1c70d7776) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 9a5cba43d2da5cb80791868e74eed8b0b9f11148) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 866fbea0c3067387b4c2c0a5d01c838551456fdb) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 473a05083221e82d4ff4ed100219c1fb19c442c3 Author: David Carlier <dcarlier@afilias.info> Date: Wed Jul 10 21:19:24 2019 +0100 BUG/MEDIUM: da: cast the chunk to string. in fetch mode, the output was incorrect, setting the type to string explicitally. This should be backported to all stable versions. (cherry picked from commit 7df4185f3c0ca87fb1fe47522af452d98e0e36e7) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 9d6e73391d0d6ebb41d6eb05340c5cdb0bc26df3) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit ea764c59a70cfeaa575d6f9de936090e1fdf4627) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 21c34e6f4ee9f062c4d05ab863a0ce95ce41e281) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 722d0011e665e5f6798f8c4b5bfb04392a6090f1 Author: Willy Tarreau <w@1wt.eu> Date: Tue Oct 22 12:49:13 2019 +0200 BUILD/MINOR: ssl: silence a build warning about const and 'cipher' sk_SSL_CIPHER_value() returns a const, stored in local variable "cipher" which is not const, causing a build warning to be emitted. Let's turn "cipher" to const to address it. It only affects 1.7 and maybe earlier, since this part was reworked in 1.8. commit 76adf7a64009a9fdfec058d22bad3aa7df2e8379 Author: Tim Duesterhus <tim@bastelstu.be> Date: Wed Oct 16 15:11:15 2019 +0200 BUG/MINOR: sample: Make the `field` converter compatible with `-m found` Previously an expression like: path,field(2,/) -m found always returned `true`. Bug exists since the `field` converter exists. That is: f399b0debfc6c7dc17c6ad503885c911493add56 The fix should be backported to 1.6+. (cherry picked from commit 4381d26edc03faa46401eb0fe82fd7be84be14fd) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 4fa9857b3dc57703c99982a140df5d8119351262) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit d513ff45727e0f1ad1b768723d073ed1b19acd86) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 6a8e7f238799b0f5928e0b2ddc3bcf3be5bed2bf) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 56e602c293b6ddb48e4edf54e41ec95f72ea8c9a Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon Oct 21 10:53:34 2019 +0200 BUG/MINOR: stick-table: Never exceed (MAX_SESS_STKCTR-1) when fetching a stkctr When a stick counter is fetched, it is important that the requested counter does not exceed (MAX_SESS_STKCTR -1). Actually, there is no bug with a default build because, by construction, MAX_SESS_STKCTR is defined to 3 and we know that we never exceed the max value. scN_* sample fetches are numbered from 0 to 2. For other sample fetches, the value is tested. But there is a bug if MAX_SESS_STKCTR is set to a lower value. For instance 1. In this case the counters sc1_* and sc2_* may be undefined. This patch fixes the issue #330. It must be backported as far as 1.7. 
(cherry picked from commit a9fa88a1eac9bd0ad2cfb761c4b69fd500a1b056) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 33c7e12479cb9bdc2e7e3783fda78a1b2c242363) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit e5c0021542f17019148f0abb4a93b254f86226f3) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 50bd8d6987bef6d9e2719cdaeeb76bf5cc1674c7) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 5163d7eb6ed82b081f03e4c38138156d7e322a46 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon Oct 21 09:55:49 2019 +0200 BUG/MINOR: ssl: Fix fd leak on error path when a TLS ticket keys file is parsed When an error occurred in the function bind_parse_tls_ticket_keys(), during the configuration parsing, the opened file is not always closed. To fix the bug, all errors are catched at the same place, where all ressources are released. This patch fixes the bug #325. It must be backported as far as 1.7. (cherry picked from commit e566f3db11e781572382e9bfff088a26dcdb75c5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 2bbc80ded1bc90dbf406e255917a1aa59c52902c) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit dd68cbd4797ef27040d0fae5db6e7a7da8ca0952) [wt: context adjustments around the different use of base64dec()] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 8af0b88c05ddb41340a2b9213a5f701d34fc6df3) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 47e37b0e8ce1073ae0b698094ff4dad620ed3fed Author: William Lallemand <wlallemand@haproxy.com> Date: Fri Oct 4 17:36:55 2019 +0200 BUG/MINOR: ssl: abort on sni_keytypes allocation failure The ssl_sock_populate_sni_keytypes_hplr() function does not return an error upon an allocation failure. The process would probably crash during the configuration parsing if the allocation fail since it tries to copy some data in the allocated memory. This patch could be backported as far as 1.5. (cherry picked from commit 28a8fce485a94b636f6905134509c1150690b60f) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 4801c70bead696bed077fd71a55f6ff35bb6f9f5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit e697d6d366ee13b7645c0886c97d912242bff87d) [wt: context] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 072124d62e9b8166fd7ce5968885ec1f5ef96f74) Signed-off-by: Willy Tarreau <w@1wt.eu> commit b6766e41ea821ba08e78d642ffb5f1ff16c956d6 Author: William Lallemand <wlallemand@haproxy.com> Date: Thu Oct 3 23:46:33 2019 +0200 BUG/MINOR: ssl: abort on sni allocation failure The ssl_sock_add_cert_sni() function never return an error when a sni_ctx allocation fail. It silently ignores the problem and continues to try to allocate other snis. It is unlikely that a sni allocation will succeed after one failure and start a configuration without all the snis. But to avoid any problem we return a -1 upon an sni allocation error and stop the configuration parsing. This patch must be backported in every version supporting the crt-list sni filters. (as far as 1.5) (cherry picked from commit fe49bb3d0c046628d67d57da15a7034cc2230432) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> [Cf: slightly adapted for 2.0] (cherry picked from commit 24e292c1054616e06b1025441ed7a0a59171d108) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 63e63cb341cfb7607c83a085700ac9bf8fe14a3a) [wt: minor adjustments. 
Note: ssl_sock_add_cert_sni()'s return values are not documented, only did minor validity checks, at least it builds] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit c854d49877e2739cf167dae2d2b62f55f2075352) [wt: more context adjustments] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 26dffc790167df4176d80eb90c5f645b1725b8c1 Author: William Lallemand <wlallemand@haproxy.com> Date: Fri Oct 4 17:24:39 2019 +0200 BUG/MINOR: ssl: free the sni_keytype nodes This patch frees the sni_keytype nodes once the sni_ctxs have been allocated in ssl_sock_load_multi_ckchn(); Could be backported in every version using the multi-cert SSL bundles. (cherry picked from commit 8ed5b965872e3b0cd6605f37bc8fe9f2819ce03c) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit e92c030dfcb298daa7175a789a2fdea42a4784c5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit eb5e2b9be62454d927532a135b0519d5e232edd4) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit e6f11545eb58aeb131b3de1b1651273c62c8b9fc) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 591ee68a590ed8d6ffc5b3274eeee9dbb37627c0 Author: Krisztian Kovacs <krisztian.kovacs@oneidentity.com> Date: Tue Sep 24 14:12:13 2019 +0200 BUG/MEDIUM: namespace: close open namespaces during soft shutdown When doing a soft shutdown, we won't be making new connections anymore so there's no point in keeping the namespace file descriptors open anymore. Keeping these open effectively makes it impossible to properly clean up namespaces which are no longer used in the new configuration until all previously opened connections are closed in the old worker process. This change introduces a cleanup function that is called during soft shutdown that closes all namespace file descriptors by iterating over the namespace ebtree. (cherry picked from commit 710d987cd62ab0779418f14aa2168dc10ef6bac7) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 6d215536f4aa2e3c95fde9d001a1c894d4eecb93) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 5a861225731bf181bef3e0b6b92490dd836bd52c) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit c22da9a9e043ab121e21588217861c119fde9483) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 3edee47f76bdd277d7dc6f9a37d767e2e4d555dd Author: Dragan Dosen <ddosen@haproxy.com> Date: Tue Apr 30 00:38:36 2019 +0200 BUG/MINOR: haproxy: fix rule->file memory leak When using the "use_backend" configuration directive, the configuration file name stored as rule->file was not freed in some situations. This was introduced in commit 4ed1c95 ("MINOR: http/conf: store the use_backend configuration file and line for logs"). This patch should be backported to 1.9, 1.8 and 1.7. (cherry picked from commit 2a7c20f602e5d40e9f23c703fbcb12e3af762337) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 60277d1a38b45b014478d33627a9bbb99cc9ee9e) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit ccb3136727d1fd5efccd4689199aa29f530f6ed0) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 5c19aa8157aaa00eda2b0f9b9e69ca7aac765cac Author: Willy Tarreau <w@1wt.eu> Date: Thu Aug 1 18:51:38 2019 +0200 BUG/MINOR: stream-int: also update analysers timeouts on activity Between 1.6 and 1.7, some parts of the stream forwarding process were moved into lower layers and the stream-interface had to keep the stream's task up to date regarding the timeouts. 
The analyser timeouts were not updated there as it was believed this was not needed during forwarding, but actually there is a case for this which is "option contstats" which periodically triggers the analyser timeout, and this change broke the option in case of sustained traffic (if there is some I/O activity during the same millisecond as the timeout expires, then the update will be missed). This patch simply brings back the analyser expiration updates from process_stream() to stream_int_notify(). It may be backported as far as 1.7, taking care to adjust the fields names if needed. (cherry picked from commit 45bcb37f0f8fa1e16dd9358a59dc280a38834dcd) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 7343c710152c586a232a194ef37a56af636d6a56) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 542c6ffc6c2607cda9885d52b44e3dccbd704c61) [wt: s/sio/si_opposite(si)] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 3f78bfcbc5ba98b143751b205e6b13a0f7b9811a) Signed-off-by: Willy Tarreau <w@1wt.eu> commit cad491e3e9cd04024238d030145195b3f2dd2fd1 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Thu Aug 1 10:09:29 2019 +0200 BUG/MEDIUM: lb-chash: Ensure the tree integrity when server weight is increased When the server weight is increased in consistant hash, extra nodes have to be allocated. So a realloc() is performed on the nodes array of the server. the previous commit 962ea7732 ("BUG/MEDIUM: lb-chash: Remove all server's entries before realloc() to re-insert them after") have fixed the size used during the realloc() to avoid segfaults. But another bug remains. After the realloc(), the memory area allocated for the nodes array may change, invalidating all node addresses in the chash tree. So, to fix the bug, we must remove all server's entries from the chash tree before the realloc to insert all of them after, old nodes and new ones. The insert will be automatically handled by the loop at the end of the function chash_queue_dequeue_srv(). Note that if the call to realloc() failed, no new entries will be created for the server, so the effective server weight will be unchanged. This issue was reported on Github (#189). This patch must be backported to all versions since the 1.6. (cherry picked from commit 0a52c17f819a5b0a17718b605bdd990b9e2b58e6) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 0fc2d46fabb2b9317daf7030162e828c7e1684d5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 6bba81f8c32090343ca3ae258aae2056bb119c8e) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit dff61590ab6a8e154c43444e813895854ce410f5) [wt: s/next_eweight/eweight] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 1c2722b8e2829e94a0aac185f73da91ef00b3520 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Jul 26 13:52:13 2019 +0200 BUG/MEDIUM: lb-chash: Fix the realloc() when the number of nodes is increased When the number of nodes is increased because the server weight is changed, the nodes array must be realloc. But its new size is not correctly set. Only the total number of nodes is used to set the new size. But it must also depends on the size of a node. It must be the total nomber of nodes times the size of a node. This issue was reported on Github (#189). This patch must be backported to all versions since the 1.6. 
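The two lb-chash fixes above boil down to two realloc() rules: size the new block as count * sizeof(element), not just count, and assume the block may move, so any structure that stores addresses into it must be emptied before the realloc() and repopulated afterwards. Below is a self-contained C sketch under those assumptions; struct ring, ring_grow() and the pointer index are invented names, while HAProxy's real code removes and re-inserts ebtree nodes around the realloc() in chash_queue_dequeue_srv().

#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for a consistent-hash node; an external index (a plain
 * array of pointers here, standing in for the ebtree) stores addresses
 * of elements that live inside the resizable array. */
struct node {
    unsigned hash;
};

struct ring {
    struct node  *nodes;  /* contiguous array, grows when the weight grows */
    struct node **index;  /* holds pointers INTO the nodes array           */
    size_t        count;
};

static int ring_grow(struct ring *r, size_t new_count)
{
    struct node *new_nodes;
    struct node **new_index;
    size_t i;

    /* 1) Forget every stored address first: realloc() is allowed to move
     *    the block, which would leave the index full of dangling pointers. */
    for (i = 0; i < r->count; i++)
        r->index[i] = NULL;

    /* 2) Size the new block as count * sizeof(element), not just count. */
    new_nodes = realloc(r->nodes, new_count * sizeof(*new_nodes));
    if (!new_nodes)
        return -1;            /* old array and old count stay valid */
    r->nodes = new_nodes;

    new_index = realloc(r->index, new_count * sizeof(*new_index));
    if (!new_index)
        return -1;
    r->index = new_index;

    /* 3) Re-insert everything, old and new entries alike, using the
     *    addresses of the (possibly moved) array. */
    for (i = r->count; i < new_count; i++)
        r->nodes[i].hash = (unsigned)(i * 2654435761u);
    for (i = 0; i < new_count; i++)
        r->index[i] = &r->nodes[i];

    r->count = new_count;
    return 0;
}

int main(void)
{
    struct ring r = { NULL, NULL, 0 };

    if (ring_grow(&r, 4) == 0 && ring_grow(&r, 16) == 0)
        printf("ring now has %zu nodes\n", r.count);
    free(r.nodes);
    free(r.index);
    return 0;
}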
(cherry picked from commit 366ad86af72c455cc958943913cb2de20eefee71) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 974c6916ba2f7efc83193bb8c04e95294ca21112) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 4e633502827c2f8b607d71ae01dc5a6ec70ba785) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 5c0e731a031057099c2f42ef1807f2454badc983) [wt: s/next_eweight/eweight] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 526b7cb0b3b4f1016f973c0e04eedeba6dff39c5 Author: Tim Duesterhus <tim@bastelstu.be> Date: Sun Sep 29 23:03:07 2019 +0200 BUG/MINOR: lua: Properly initialize the buffer's fields for string samples in hlua_lua2(smp|arg) `size` is used in conditional jumps and valgrind complains: ==24145== Conditional jump or move depends on uninitialised value(s) ==24145== at 0x4B3028: smp_is_safe (sample.h:98) ==24145== by 0x4B3028: smp_make_safe (sample.h:125) ==24145== by 0x4B3028: smp_to_stkey (stick_table.c:936) ==24145== by 0x4B3F2A: sample_conv_in_table (stick_table.c:1113) ==24145== by 0x420AD4: hlua_run_sample_conv (hlua.c:3418) ==24145== by 0x54A308F: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0) ==24145== by 0x54AFEFC: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0) ==24145== by 0x54A29F1: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0) ==24145== by 0x54A3523: lua_resume (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0) ==24145== by 0x426433: hlua_ctx_resume (hlua.c:1097) ==24145== by 0x42D7F6: hlua_action (hlua.c:6218) ==24145== by 0x43A414: http_req_get_intercept_rule (http_ana.c:3044) ==24145== by 0x43D946: http_process_req_common (http_ana.c:500) ==24145== by 0x457892: process_stream (stream.c:2084) Found while investigating issue #306. A variant of this issue exists since 55da165301b4de213dacf57f1902c2142e867775, which was using the old `chunk` API instead of the `buffer` API thus this patch must be backported to HAProxy 1.6 and higher. (cherry picked from commit 29d2e8aa9abe48539607692ba69a6a5fb4e96ca8) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit f371b834c61394196b3c3cb4c76cabbe80f9e6fe) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit a1ed96379f01943a0989bbcf8ca05e22e6f983f9) [wt: adapt to pre-1.9 chunk API] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 626f412a55941df73b0ccb27a605cc652ff5df0e) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 81f0ad9624f3645c9f09ba09f8ea6b17620fd7d3 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Thu May 23 11:14:21 2019 +0200 BUG/MINOR: lua: Set right direction and flags on new HTTP objects When a LUA HTTP object is created using the current TXN object, it is important to also set the right direction and flags, using ones from the TXN object. This patch may be backported to all supported branches with the lua support. But, it seems to have no impact for now. 
(cherry picked from commit 256b69a82d9e096a5d67c6ae3685abd50f24193f) [wt: needed for commit c6131b89e] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 6d3277cc2c4af593249885e63b4f05ed94ef9610) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 747fc5ba3c95ab775c7261269b2b9eadef8a8f15) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 4359ff9719122ec2c08df572a2f4132bb96ea216 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Jul 26 16:31:34 2019 +0200 BUG/MINOR: hlua: Only execute functions of HTTP class if the txn is HTTP ready The flag HLUA_TXN_HTTP_RDY was added in the previous commit to know when a function is called for a channel with a valid HTTP message or not. Of course it also depends on the calling direction. In this commit, we allow the execution of functions of the HTTP class only if this flag is set. Nobody seems to use them from an unsupported context (for instance, trying to set an HTTP header from a tcp-request rule). But it remains a bug leading to undefined behaviors or crashes. This patch may be backported to all versions since the 1.6. It depends on the commits "MINOR: hlua: Add a flag on the lua txn to know in which context it can be used" and "MINOR: hlua: Don't set request analyzers on response channel for lua actions". (cherry picked from commit 301eff8e215d5dc7130e1ebacd7cf8da09a4f643) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 2351ca211d655c1be9ef6d62880899102134266d) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 9108f1b3dae52112e70f2f23385360f4f50b172a) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit eb7ba20801e82c60c7002001675a81330b732cd3) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 1debd4697d84efa9eae65059e2e574cfef76816c Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Jul 26 15:09:53 2019 +0200 MINOR: hlua: Add a flag on the lua txn to know in which context it can be used When a lua action or a lua sample fetch is called, a lua transaction is created. It is an entry in the stack containing the class TXN. Thanks to it, we can know the direction (request or response) of the call. But, for some functions, it is also necessary to know if the buffer is "HTTP ready" for the given direction. "HTTP ready" means there is a valid HTTP message in the channel's buffer. So, when a lua action or a lua sample fetch is called, the flag HLUA_TXN_HTTP_RDY is set if it is appropriate. (cherry picked from commit bfab2dddad3ded87617d1e2db54761943d1eb32d) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit ff96b8bd3f85155f65b2b9c9f046fe3e40f630a4) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 9c848275ab7cf38b3bfb60643a02d6fc253b4aa0) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit ae74c810827953320587bd9eae7b5e2e21dbff36) [wt: s/s->hlua/&s->hlua] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 44f786693165fff07048392188fc1e4a7cfff1ca Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Jul 26 14:54:52 2019 +0200 MINOR: hlua: Don't set request analyzers on response channel for lua actions Setting some requests analyzers on the response channel was an old trick to be sure to re-evaluate the request's analyers after the response's ones have been called. It is no more necessary. In fact, this trick was removed in the version 1.8 and backported up to the version 1.6. 
This patch must be backported to all versions since 1.6 to ease the backports of fixes on the lua code. (cherry picked from commit 51fa358432247fe5d7259d9d8a0e08d49d429c73) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit b22f6501bc9838061472128360e0e55d08cb0bd9) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 7d90cd1390201a72fedc7a80cdfe167eb359e656) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 240e1b9ed89b9e37bc4e33e9b919dd3a3cefa0cb) [wt: pass &s->hlua instead of s->hlua] Signed-off-by: Willy Tarreau <w@1wt.eu> commit bac1463d0108488cf184173c30ce2e3398e1ef9b Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Jul 26 16:17:01 2019 +0200 BUG/MEDIUM: hlua: Check the calling direction in lua functions of the HTTP class It is invalid to manipulate responses from http-request rules or to manipulate requests from http-response rules. When http-request rules are evaluated, the connection to server is not yet established, so there is no response at all. And when http-response rules are evaluated, the request has already been sent to the server. Now, the calling direction is checked. So functions "txn.http:req_*" can now only be called from http-request rules and the functions "txn.http:res_*" can only be called from http-response rules. This issue was reported on Github (#190). This patch must be backported to all versions since the 1.6. (cherry picked from commit 84a6d5bc217a418db8efc4e76a0a32860db2c608) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit dc2ee27c7a1908ca3157a10ad131f13644bcaea3) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit c6131b89e2cdcbbbbb0dae15a44d9c1e15a34a8a) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 8a871bc4492fea33b3dcbff9aeb1f82554d6cdf9) Signed-off-by: Willy Tarreau <w@1wt.eu> commit bb8c71349330ae2047d8ce081d0830909aa52200 Author: Willy Tarreau <w@1wt.eu> Date: Fri Jul 26 15:21:54 2019 +0200 DOC: improve the wording in CONTRIBUTING about how to document a bug fix Insufficiently described bug fixes are still too frequent. It's a real pain to create each new maintenance release, as 3/4 of the time is spent trying to guess what problem a patch fixes, which is already important in order to decide whether to pick the fix or not, but is even more capital in order to write understandable release notes. Christopher rightfully demands that a patch tagged "BUG" MUST ABSOLUTELY describe the problem and why this problem is a bug. Describing the fix is one thing but if the bug is unknown, why would there be a fix ? How can a stable maintainer be convinced to take a fix if its author didn't care about checking whether it was a real bug ? This patch tries to explain a bit better what really needs to appear in the commit message and how to describe a bug. To be backported to all relevant stable versions. 
(cherry picked from commit 41f638c1eb8167bb473a6c8811d7fd70d7c06e07) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 8de6badd32fb584d60733a6236113edba00f8701) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 04d9e88c7da4a0e516c5442056fc0b99d7637dec) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit d8eb25742a317c0fb58d323b1e683ffe8dcaba01) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 440a7208334106807849d665e22c741344b098c5 Author: Tim Duesterhus <tim@bastelstu.be> Date: Mon Jun 17 16:10:07 2019 +0200 BUG/MEDIUM: compression: Set Vary: Accept-Encoding for compressed responses Make HAProxy set the `Vary: Accept-Encoding` response header if it compressed the server response. Technically the `Vary` header SHOULD also be set for responses that would normally be compressed based off the current configuration, but are not due to a missing or invalid `Accept-Encoding` request header or due to the maximum compression rate being exceeded. Not setting the header in these cases does no real harm, though: An uncompressed response might be returned by a Cache, even if a compressed one could be retrieved from HAProxy. This increases the traffic to the end user if the cache is unable to compress itself, but it saves another roundtrip to HAProxy. see the discussion on the mailing list: https://www.mail-archive.com/haproxy@formilux.org/msg34221.html Message-ID: 20190617121708.GA2964@1wt.eu A small issue remains: The User-Agent is not added to the `Vary` header, despite being relevant to the response. Adding the User-Agent header would make responses effectively uncacheable and it's unlikely to see a Mozilla/4 in the wild in 2019. Add a reg-test to ensure the behaviour as described in this commit message. see issue #121 Should be backported to all branches with compression (i.e. 1.6+). (cherry picked from commit 721d686bd10dc6993859f9026ad907753d1d2064) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit eaf650083924a697cde3379703984c5e7a5ebd41) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 96942d657ec7f29a328a5759558dbaa26d8e3e53) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> [Cf: The patch was updated because there is no HTX in 1.8] (cherry picked from commit a27131f6e1e6c3e16e056915ba5ec2c560051296) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 427c39e14992a79b97757b1c3897939c35922895 Author: Willy Tarreau <w@1wt.eu> Date: Tue Jun 4 16:43:29 2019 +0200 BUG/MEDIUM: vars: make the tcp/http unset-var() action support conditions Patrick Hemmer reported that http-request unset-var(foo) if ... fails to parse. The reason is that it reuses the same parser as "set-var(foo)" which makes a special case of the arguments, supposed to be a sample expression for set-var, but which must not exist for unset-var. Unfortunately the parser finds "if" or "unless" and believes it's an expression. Let's simply drop the test so that the outer rule parser deals with potential extraneous keywords. This should be backported to all versions supporting unset-var(). 
(cherry picked from commit 4b7531f48b5aa66d11fcee2836c201644bfb6a71) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit a47e745276662db361637914b8558984f091306b) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit a41ac2d710711f3ab91d92415278a73c358aedca) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 5bb6976ddd3345a57c9542edc68feda3f5818166 Author: Willy Tarreau <w@1wt.eu> Date: Tue Jun 4 16:27:36 2019 +0200 BUG/MEDIUM: vars: make sure the scope is always valid when accessing vars Patrick Hemmer reported that a simple tcp rule involving a variable like this is enough to crash haproxy : frontend foo bind :8001 tcp-request session set-var(txn.foo) src The tests on the variables scopes is not strict enough, it needs to always verify if the stream is valid when accessing a req/res/txn variable. This patch does this by adding a new get_vars() function which does the job instead of open-coding all the lookups everywhere. It must be backported to all versions supporting set-var and "tcp-request session" so at least 1.9 and 1.8. (cherry picked from commit f37b140b06b9963dea8adaf5e13b5b57cd219c74) [wt: s/_HA_/HA_/] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 5dcf8515592602ed0d962e365cbb74a3646727c1) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 47133bd99225554519c1d32293e0e5c3db83db30) [wt: significant readjustments due to context changes] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 61967759791179680b1f06917480b437ac4b2dfc Author: Christopher Faulet <cfaulet@haproxy.com> Date: Tue Apr 30 14:08:41 2019 +0200 CLEANUP: config: Don't alter listener->maxaccept when nbproc is set to 1 This patch only removes a useless calculation on listener->maxaccept when nbproc is set to 1. Indeed, the following formula has no effet in such case: listener->maxaccept = (listener->maxaccept + nbproc - 1) / nbproc; This patch may be backported as far as 1.5. (cherry picked from commit 02f3cf19ed803d20aff9294ce7cb732489951ff5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 14203e3cf9404e57de5e274b453f0fe4f2174924) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit c6e03c1495fa51f9a98ed0bbe3230313c7c7201c) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 794c3f11e9a3dc6b0d8f2586b249f0f9d5bdbe11 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Tue Apr 30 14:03:56 2019 +0200 MINOR: config: Test validity of tune.maxaccept during the config parsing Only -1 and positive integers from 0 to INT_MAX are accepted. An error is triggered during the config parsing for any other values. This patch may be backported to all supported versions. (cherry picked from commit 6b02ab87348090efec73b1dd24f414239669f279) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 2bbc40f8bc9a52ba0d03b25270ac0129cca29bba) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 6e580b6e744011e87c337ebe2c082acfd5ca835a) Signed-off-by: Willy Tarreau <w@1wt.eu> commit f31433177680684e0326a6f2f082329ecd0c1d2c Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Sep 13 09:50:15 2019 +0200 BUG/MINOR: acl: Fix memory leaks when an ACL expression is parsed This only happens during the configuration parsing. First leak is the string representing the last converter parsed, if any. The second one is on the error path, when the allocation of the ACL expression failed. In this case, the sample was not released. 
This patch fixes the issue #256. It must be backported to all stable versions. (cherry picked from commit 361935aa1e327d2249453eab0b8f0300683f47b2) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 6a4c746b63c89c7d4c5f21d79ceb45207ebb24bb) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit d84d945ac2a3d33044d7d56b8ec709d9e6a0aec3) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit b560440ea26e44a950155d1932c1cd4b4dd7fc00) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 28e7fa508e909a3fd112a10e11a29ce6a88dcb29 Author: Willy Tarreau <w@1wt.eu> Date: Tue Apr 30 11:43:43 2019 +0200 BUG/MAJOR: map/acl: real fix segfault during show map/acl on CLI A previous commit 8d85aa44d ("BUG/MAJOR: map: fix segfault during 'show map/acl' on cli.") was provided to address a concurrency issue between "show acl" and "clear acl" on the CLI. Sadly the code placed there was copy-pasted without changing the element type (which was struct stream in the original code) and not tested since the crash is still present. The reproducer is simple : load a large ACL file (e.g. geolocation addresses), issue "show acl #0" in loops in one window and issue a "clear acl #0" in the other one, haproxy crashes. This fix was also tested with threads enabled and looks good since the locking seems to work correctly in these areas though. It will have to be backported as far as 1.6 since the commit above went that far as well... (cherry picked from commit 49ee3b2f9a9e5d0b8d394938df527aa645ce72b4) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit ac4be10f62ef72962d9cf0e6f2619e1e1c370d62) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 83af1f6b65806982640679823228976deebf5202) [wt: dropped the reload part that was added in 1.8 for threads] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 6257216277fa352b1a179cb0f5f7e9027deea84c Author: Willy Tarreau <w@1wt.eu> Date: Fri Apr 19 11:45:20 2019 +0200 BUG/MINOR: acl: properly detect pattern type SMP_T_ADDR Since 1.6-dev4 with commit b2f8f087f ("MINOR: map: The map can return IPv4 and IPv6"), maps can return both IPv4 and IPv6 addresses, which is represented as SMP_T_ADDR at the output of the map converter. But the ACL parser only checks for either SMP_T_IPV4 or SMP_T_IPV6 and requires to see an explicit matching method specified. Given that it uses the same pattern parser for both address families, it implicitly is also compatible with SMP_T_ADDR, which ought to have been added there. This fix should be backported as far as 1.6. (cherry picked from commit 78c5eec9497e1e60565492bc69581aea439e54cc) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit ce727199a5b1a7c58cce1b0cfe79b91c6c138935) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 4aa6348c04bc854b1dc47227b6931d43e704968d) Signed-off-by: Willy Tarreau <w@1wt.eu> commit a07327c5e1626ffbf1a4d072058091b0ed1fa3fa Author: Willy Tarreau <w@1wt.eu> Date: Fri Apr 19 11:35:22 2019 +0200 BUG/MEDIUM: maps: only try to parse the default value when it's present Maps returning an IP address (e.g. map_str_ip) support an optional default value which must be parsed. Unfortunately the parsing code does not check for this argument's existence and uncondtionally tries to resolve the argument whenever the output is of type address, resulting in segfaults at parsing time when no such argument is provided. This patch adds the appropriate check. 
This fix may be backported as far as 1.6. (cherry picked from commit aa5801bcaade82ce58b9a70f320b7d0389e444b0) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 0ad6d18945467f4d6defaad619ae49f939770ba2) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 814ca94cbcba61a11485dedf80f6b35c34e4d74b) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 43db3dc67dda759c4b4b575bbbb111200408fc5c Author: Willy Tarreau <w@1wt.eu> Date: Thu Apr 11 14:47:08 2019 +0200 BUG/MEDIUM: pattern: assign pattern IDs after checking the config validity Pavlos Parissis reported an interesting case where some map identifiers were not assigned (appearing as -1 in show map). It turns out that it only happens for log-format expressions parsed in check_config_validity() that involve maps (log-format, use_backend, unique-id-header), as in the sample configuration below : frontend foo bind :8001 unique-id-format %[src,map(addr.lst)] log-format %[src,map(addr.lst)] use_backend %[src,map(addr.lst)] The reason stems from the initial introduction of unique IDs in 1.5 via commit af5a29d5f ("MINOR: pattern: Each pattern is identified by unique id.") : the unique_id assignment was done before calling check_config_validity() so all maps loaded after this call are not properly configured. From what the function does, it seems they will not be able to use a cache, will not have a unique_id assigned and will not be updatable from the CLI. This fix must be backported to all supported versions. (cherry picked from commit 0f93672dfea805268d674c97573711fbff7e0e70) Signed-off-by: William Lallemand <wlallemand@haproxy.org> (cherry picked from commit ba475a5b390f58450756da67dbf54bf063f2dbef) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 3bd00f356783d331deba80de76c989d416e4a52e) [wt: minor context updates, tlskeys updates present] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 626a293b4e71a94876b5283a90432e1e9d7c79a7 Author: Emeric Brun <ebrun@haproxy.com> Date: Tue Apr 2 17:22:01 2019 +0200 BUG/MEDIUM: peers: fix a case where peer session is not cleanly reset on release. The deinit took place in only peer_session_release, but in the a case of a previous call to peer_session_forceshutdown, the session cursors won't be reset, resulting in a bad state for new session of the same peer. For instance, a table definition message could be dropped and so all update messages will be dropped by the remote peer. This patch move the deinit processing directly in the force shutdown funtion. Killed session remains in "ST_END" state but ref on peer was reset to NULL and deinit will be skipped on session release function. The session release continue to assure the deinit for "active" sessions. This patch should be backported on all stable version since proto peers v2. 
(cherry picked from commit 9ef2ad7844e577b505019695c59284f4a439fc33) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 14831989a081f3944cf891afd56e6d9f6086c3ed) [cf: global variabled connected_peers and active_peers don't exist in 1.8] Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 3bb33335816c1c9549d21bcc14bed29519b938a3) [wt: removed the locking code] Signed-off-by: Willy Tarreau <w@1wt.eu> commit c8901d939b58c9a0abb26fac4cd72a5f5a025c22 Author: Willy Tarreau <w@1wt.eu> Date: Sat Jun 22 08:24:16 2019 +0200 BUILD: makefile: do not rely on shell substitutions to determine git version Solaris's default shell doesn't support substitutions at the beginning or end of variables, which are still used to determine the version based on git. Since we added --abbrev=0 we don't need the last one. And using cut it's trivial to replace the first one, actually simplifying the whole expression. This may be backported to all stable branches. (cherry picked from commit 3c55efb7dd0603023cc33bfce914073945140898) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 68badf8650f9235507bae8b006438ccc072a3093) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit bab732da3a37c5e784e96b2a1fdcc77fc8d02b2d) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit ed8c148ca9bf2a3a92a24a7f8c5dbedffa5bcc61) Signed-off-by: Willy Tarreau <w@1wt.eu> commit e1e793a34f90e99ee30de41168c9ce5cf003f1e4 Author: Willy Tarreau <w@1wt.eu> Date: Sat Jun 22 07:51:02 2019 +0200 BUILD: makefile: use :space: instead of digits to count commits The 'tr' command on Solaris doesn't conform to POSIX and requires brackets around ranges. So the sequence '0-9' is understood as the 3 characters '0', '-', and '9'. This causes tagged versions (those with no commit after the last commit) to be numberred as an empty string, resulting in an error being reported while computing the version number. All implementations support '[:space:]' to delete heading spaces, so let's use this instead. This may be backported to all stable versions. (cherry picked from commit 30a6f6402e385e76870618e3b1950ac989e93612) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit ca3b047b3036897ec0b3c6fb0f2fe01882859ba9) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit bdbab8cd92665ef7d674b39cb80862aa580063f3) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 4862f9e0cb30e8e4bce26584b93721dd880cc5c0) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 35d65c0354805d6692344f519387750b37ebe04e Author: Willy Tarreau <w@1wt.eu> Date: Fri Mar 29 17:17:52 2019 +0100 BUILD: makefile: work around an old bug in GNU make-3.80 GNU make-3.80 fails on the .build_opts target, expecting the closing brace before the first semi-colon in the shell command, it probably uses a more limited parser for dependencies. Actually it appears it's enough to place this command in a variable and reference the variable there. Since it doesn't affect later versions (and the resulting string is always empty anyway), let's apply the minor change to continue to comply with the announced dependencies. This could be backported as far as 1.6. 
(cherry picked from commit 509a009c5dd06e680bc2fff6ebc45f7f42aaee3e) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 953b732eef96689f2b11bc2768ba05f28feac9a5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 795773be8c3ddc8380f134adc7e2ccfde2d8469b) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 9b8513d2906928282a0d46e9136a5fa92cc63283 Author: Willy Tarreau <w@1wt.eu> Date: Mon Jul 15 10:57:51 2019 +0200 BUG/MEDIUM: tcp-check: unbreak multiple connect rules again The last connect rule used to be ignored and that was fixed by commit 248f1173f ("BUG/MEDIUM: tcp-check: single connect rule can't detect DOWN servers") during 1.9 development. However this patch went a bit too far by not breaking out of the loop after a pending connect(), resulting in a series of failed connect() to be quickly skipped and only the last one to be taken into account. Technically speaking the series is not exactly skipped, it's just that TCP checks suffer from a design issue which is that there is no distinction between a new rule and this rule's completion in the "connect" rule handling code. As such, when evaluating TCPCHK_ACT_CONNECT a new connection is created regardless of any previous connection in progress, and the previous result is ignored. It seems that this issue is mostly specific to the connect action if we refer to the comments at the top of the function, so it might be possible to durably address it by reworking the connect state. For now this patch does something simpler, it restores the behaviour before the commit above consisting in breaking out of the loop when the connection is in progress and after skipping comment rules. This way we fall back to the default code waiting for completion. This patch must be backported as far as 1.8 since the commit above was backported there. Thanks to Jérôme Magnin for reporting and bisecting this issue. (cherry picked from commit 7df8ca6296e7f2af2cadf05830af54b998e9c196) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 7199e96c8eea193cdf158e03d37ff5a6f4b58190) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 27f14b0f273ec46a9974bcbcf4acb33b7b8d16f7) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 46a8cde865a78ecc864babb1ea8a084dd90a62e8) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 206dfa62d9f272a5ffdc2e19dd79eff76af42a7e Author: Ricardo Nabinger Sanchez <rnsanchez@taghos.com.br> Date: Thu Mar 28 21:42:23 2019 -0300 BUG/MAJOR: checks: segfault during tcpcheck_main When using TCP health checks (tcp-check connect), it is possible to crash with a segfault when, for reasons yet to be understood, the protocol family is unknown. In the function tcpcheck_main(), proto is dereferenced without a prior test in case it is NULL, leading to the segfault during proto->connect dereference. The line has been unmodified since it was introduced, in commit 69e273f3fcfbfb9cc0fb5a09668faad66cfbd36b. This was the only use of proto (or more specifically, the return of protocol_by_family()) that was unprotected; all other callsites perform the test for a NULL pointer. This patch should be backported to 1.9, 1.8, 1.7, and 1.6. 
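For the tcpcheck segfault above, a self-contained sketch of the added guard, with a toy lookup standing in for protocol_by_family(): the lookup result is tested before its connect handler is dereferenced, which is exactly the check the original call site was missing.

    #include <stdio.h>

    struct toy_protocol { int (*connect)(void); };

    /* Stand-in for protocol_by_family(): returns NULL for unknown families,
     * the case that used to crash tcpcheck_main(). */
    static struct toy_protocol *toy_protocol_by_family(int family)
    {
        (void)family;
        return NULL;
    }

    int main(void)
    {
        struct toy_protocol *proto = toy_protocol_by_family(12345);

        if (!proto) {           /* the added check: never dereference a NULL lookup */
            fprintf(stderr, "health check aborted: unknown protocol family\n");
            return 1;
        }
        return proto->connect();
    }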
(cherry picked from commit 4bccea98912c74fa42c665ec25e417c2cca4eee7) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 2cefb36087f240b66b2aa4824a317ef5f9b85e68) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit ed3951cf6d9c7846fc780042fdddc194dda47c8d) Signed-off-by: Willy Tarreau <w@1wt.eu> commit cd7d4d6bb94fbf670a422ba44695df6f076ecd14 Author: Willy Tarreau <w@1wt.eu> Date: Thu Sep 12 14:01:40 2019 +0200 BUG/MEDIUM: http: also reject messages where "chunked" is missing from transfer-enoding Nathan Davison (@ndavison) reported that in legacy mode we don't correctly reject requests or responses featuring a transfer-encoding header missing the "chunked" value. As mandated in the protocol spec, the test verifies that "chunked" is the last one, but only does so when it is present. As such, "transfer-encoding: foobar" is not rejected, only "transfer-encoding: chunked, foobar" will be. The impact is limited, but if combined with "http-reuse always", it could be used as a help to construct a content smuggling attack against a vulnerable component employing a lenient parser which would ignore the content-length header as soon as it sees a transfer-encoding one, without even parsing it. In this case haproxy would fail to protect it. The fix consists in completing the existing checks to verify that "chunked" was present if any "transfer-encoding" header was met, otherwise either reject the request message or make the response end on a close. This fix is only for 2.0 and older versions as legacy mode was removed from 2.1. It should be backported to all maintained versions. (cherry picked from commit 196a7df44d8129d1adc795da020b722614d6a581) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 5513fcaa601dd344be548430fc1760dbedebf4f2) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 3bd4bbdb9f54c18856aeb66b4b9f4a698973d3d3) Signed-off-by: Willy Tarreau <w@1wt.eu> commit a628e9cadd02bab86715cca3e3ecba273dffdd20 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Wed Sep 4 09:39:42 2019 +0200 BUG/MEDIUM: proto-http: Always start the parsing if there is no outgoing data When we are waiting for a request or a response, if the channel's buffer is not rewritable (the reservce is not fully free), nothing is done and we wait to have a rewritable buffer. It was an old implicit assumption of HTTP analyzers. On old versions, at this stage, if a buffer was not rewritable, it meant some outgoing data were pending to be sent. On recent versions, it should not happen because all outgoing data are sent before starting the analysis of the next transaction. But the applets may be lead to use the reserve. For instance, the cache applet adds the header "Age" to cached responses. It may use the reserve to do so if the size of the response headers is huge. So, in such case, the implicit assumption of a no rewritable buffer because of output data is wrong. But the message analysis remains blocked, sometime infinitely depending on circumstances. To fix the bug and to avoid any ambiguity, we now also check if there are some outgoing data when the buffer is not rewritable to postpone the message analysis. In fact, this code may probably be removed because it should never happen. But I prefer to be conservative here and don't introduce a bug because of an unknown/unexpected hidden corner case. Anyway, it is not a big deal because all legacy HTTP code is removed in the 2.1. 
This is a direct commit to the 2.0 branch, as the problem doesn't exist in master. It must be backported at least to 1.9 and 1.8 because of the cache. But it may be also backported to all stable versions. This patch should partly fix the github issue #233. (cherry picked from commit 3d36d4e720a76a12c7f6cd64c7971237d7d92d78) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit d09d66853a3700d2b9261c02e1027d13b4420f5b) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit ba3abeda541ffe93fd528e9bc8701d4faadfb680) Signed-off-by: Willy Tarreau <w@1wt.eu> commit e9b20c9a6f5dcc4ed6d4644d1e12577437e22c12 Author: Willy Tarreau <w@1wt.eu> Date: Tue Jun 11 16:01:56 2019 +0200 BUG/MINOR: http-rules: mention "deny_status" for "deny" in the error message The error message indicating an unknown keyword on an http-request rule doesn't mention the "deny_status" option which comes with the "deny" rule, this is particularly confusing. This can be backported to all versions supporting this option. (cherry picked from commit 5abdc760c99a0011607f2cc97e199ef6ce0e8486) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 4e66beaf2a32bd835db9de61a60318648258f649) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> [Cf: The fix was applied on src/proto_http.c because, in 1.8, the file src/http_rules.c does not exist.] (cherry picked from commit 8a74cad9b7fe8b9e1f5b140d90360ece838a878e) [wt: s/ha_alert/Alert] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 79903fbc04691b7c951af449314b0791aa274b69 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon Apr 29 13:12:02 2019 +0200 BUG/MINOR: http: Call stream_inc_be_http_req_ctr() only one time per request The function stream_inc_be_http_req_ctr() is called at the beginning of the analysers AN_REQ_HTTP_PROCESS_FE/BE. It as an effect only on the backend. But we must be careful to call it only once. If the processing of HTTP rules is interrupted in the middle, when the analyser is resumed, we must not call it again. Otherwise, the tracked counters of the backend are incremented several times. This bug was reported in github. See issue #74. This fix should be backported as far as 1.6. (cherry picked from commit 1907ccc2f75b78ace1ee4acdfc60d48a76e3decd) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 319921866ea9ecc46215fea5679abc8efdfcbea5) [cf: HT…
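A hedged sketch of the "count only once" idea from the stream_inc_be_http_req_ctr() fix above: remember in per-stream state that the backend counter was already bumped, so a resumed analyser does not increment it a second time. Names and types are illustrative, not haproxy's real stream structure.

    struct toy_stream {
        unsigned be_req_counted : 1;    /* set after the first increment */
    };

    static void inc_be_http_req_ctr_once(struct toy_stream *s, unsigned long *be_ctr)
    {
        if (s->be_req_counted)
            return;                     /* analyser was interrupted and resumed */
        (*be_ctr)++;
        s->be_req_counted = 1;
    }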
jiangwenyuan pushed a commit that referenced this issue on Feb 13, 2020
Squashed commit of the following: commit 95d7ebea5a0baa464ed3b114da925e312d617e18 Author: Willy Tarreau <w@1wt.eu> Date: Fri Oct 25 08:45:57 2019 +0200 [RELEASE] Released version 1.8.22 Released version 1.8.22 with the following main changes : - BUILD/MINOR: stream: avoid a build warning with threads disabled - BUG/MINOR: haproxy: fix rule->file memory leak - MINOR: connection: add new function conn_is_back() - BUG/MEDIUM: ssl: Use the early_data API the right way. - BUG/MEDIUM: checks: make sure the warmup task takes the server lock - BUG/MINOR: logs/threads: properly split the log area upon startup - MINOR: doc: Document allow-0rtt on the server line. - BUG/MEDIUM: spoe: Be sure the sample is found before setting its context - DOC: fixed typo in management.txt - BUG/MINOR: mworker: disable SIGPROF on re-exec - BUG/MEDIUM: listener/threads: fix an AB/BA locking issue in delete_listener() - BUG/MEDIUM: proto-http: Always start the parsing if there is no outgoing data - BUG/MEDIUM: http: also reject messages where "chunked" is missing from transfer-enoding - BUG/MINOR: filters: Properly set the HTTP status code on analysis error - BUG/MINOR: acl: Fix memory leaks when an ACL expression is parsed - BUG/MEDIUM: check/threads: make external checks run exclusively on thread 1 - BUG/MEDIUM: namespace: close open namespaces during soft shutdown - BUG/MAJOR: mux_h2: Don't consume more payload than received for skipped frames - MINOR: tools: implement my_flsl() - BUG/MEDIUM: spoe: Use a different engine-id per process - DOC: Fix documentation about the cli command to get resolver stats - BUG/MEDIUM: namespace: fix fd leak in master-worker mode - BUG/MINOR: lua: Properly initialize the buffer's fields for string samples in hlua_lua2(smp|arg) - BUG/MEDIUM: cache: make sure not to cache requests with absolute-uri - DOC: clarify some points around http-send-name-header's behavior - MINOR: stats: mention in the help message support for "json" and "typed" - BUG/MINOR: ssl: free the sni_keytype nodes - BUG/MINOR: chunk: Fix tests on the chunk size in functions copying data - BUG/MINOR: WURFL: fix send_log() function arguments - BUG/MINOR: tcp: Don't alter counters returned by tcp info fetchers - BUG/MINOR: ssl: abort on sni allocation failure - BUG/MINOR: ssl: abort on sni_keytypes allocation failure - CLEANUP: ssl: make ssl_sock_load_cert*() return real error codes - CLEANUP: ssl: make ssl_sock_put_ckch_into_ctx handle errcode/warn - CLEANUP: ssl: make ssl_sock_load_dh_params handle errcode/warn - CLEANUP: bind: handle warning label on bind keywords parsing. - BUG/MEDIUM: ssl: 'tune.ssl.default-dh-param' value ignored with openssl > 1.1.1 - BUG/MINOR: mworker/ssl: close OpenSSL FDs on reload - BUILD: ssl: fix again a libressl build failure after the openssl FD leak fix - BUG/MINOR: mworker/ssl: close openssl FDs unconditionally - BUG/MINOR: ssl: Fix fd leak on error path when a TLS ticket keys file is parsed - BUG/MINOR: stick-table: Never exceed (MAX_SESS_STKCTR-1) when fetching a stkctr - BUG/MINOR: sample: Make the `field` converter compatible with `-m found` - BUG/MINOR: ssl: fix memcpy overlap without consequences. 
- BUG/MINOR: stick-table: fix an incorrect 32 to 64 bit key conversion - BUG/MEDIUM: pattern: make the pattern LRU cache thread-local and lockless commit 6e9efc2d8de459b921575d1b635ead375dfb6ad5 Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 23 06:59:31 2019 +0200 BUG/MEDIUM: pattern: make the pattern LRU cache thread-local and lockless As reported in issue #335, a lot of contention happens on the PATLRU lock when performing expensive regex lookups. This is absurd since the purpose of the LRU cache was to have a fast cache for expressions, thus the cache must not be shared between threads and must remain lockless. This commit makes the LRU cache thread-local and gets rid of the PATLRU lock. A test with 7 threads on 4 cores climbed from 67kH/s to 369kH/s, or a scalability factor of 5.5. Given the huge performance difference and the regression caused to users migrating from processes to threads, this should be backported at least to 2.0. Thanks to Brian Diekelman for his detailed report about this regression. (cherry picked from commit 403bfbb130f9fb31e52d441ebc1f8227f6883c22) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 7fdd81c43fd75349d4496649d2176ad258e55a4b) [wt: s/REGISTER_PER_THREAD_ALLOC/REGISTER_PER_THREAD_INIT/] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit d8979fee8d49b444c736051fe759f8ae3aa2c997) [wt: use hap_register_per_thread_{init,deinit}() instead of the macros] Signed-off-by: Willy Tarreau <w@1wt.eu> commit ff19ac8ca311febb29269ec47db7879b7ad3d10e Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 23 06:21:05 2019 +0200 BUG/MINOR: stick-table: fix an incorrect 32 to 64 bit key conversion As reported in issue #331, the code used to cast a 32-bit to a 64-bit stick-table key is wrong. It only copies the 32 lower bits in place on little endian machines or overwrites the 32 higher ones on big endian machines. It ought to simply remove the wrong cast dereference. This bug was introduced when changing stick table keys to samples in 1.6-dev4 by commit bc8c404449 ("MAJOR: stick-tables: use sample types in place of dedicated types") so it the fix must be backported as far as 1.6. (cherry picked from commit 28c63c15f572a1afeabfdada6a0a4f4d023d05fc) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 6fe22ed08a642d27f1a228c6f3b7f9f0dd0ea4cd) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 5c779f766524d5d6a754710a5a8a4d3d3013d2e6) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 0241d465f2a7f7a28682f936f0c87061b18c8202 Author: Emeric Brun <ebrun@haproxy.com> Date: Tue Oct 8 18:27:37 2019 +0200 BUG/MINOR: ssl: fix memcpy overlap without consequences. A trick is used to set SESSION_ID, and SESSION_ID_CONTEXT lengths to 0 and avoid ASN1 encoding of these values. There is no specific function to set the length of those parameters to 0 so we fake this calling these function to a different value with the same buffer but a length to zero. But those functions don't seem to check the length of zero before performing a memcpy of length zero but with src and dst buf on the same pointer, causing valgrind to bark. So the code was re-work to pass them different pointers even if buffer content is un-used. In a second time, reseting value, a memcpy overlap happened on the SESSION_ID_CONTEXT. It was re-worked and this is now reset using the constant global value SHCTX_APPNAME which is a different pointer with the same content. 
This patch should be backported in every version since ssl support was added to haproxy if we want valgrind to shut up. This is tracked in github issue #56. (cherry picked from commit eb46965bbb21291aab75ae88f033d9c9bab4a785) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 7b34de3f4ccb3db391a416ef1796cc0a35b11712) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 675702e22476e63495a1087f8303e889c1ab47a2) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 6a8e7f238799b0f5928e0b2ddc3bcf3be5bed2bf Author: Tim Duesterhus <tim@bastelstu.be> Date: Wed Oct 16 15:11:15 2019 +0200 BUG/MINOR: sample: Make the `field` converter compatible with `-m found` Previously an expression like: path,field(2,/) -m found always returned `true`. Bug exists since the `field` converter exists. That is: f399b0debfc6c7dc17c6ad503885c911493add56 The fix should be backported to 1.6+. (cherry picked from commit 4381d26edc03faa46401eb0fe82fd7be84be14fd) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 4fa9857b3dc57703c99982a140df5d8119351262) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit d513ff45727e0f1ad1b768723d073ed1b19acd86) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 50bd8d6987bef6d9e2719cdaeeb76bf5cc1674c7 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon Oct 21 10:53:34 2019 +0200 BUG/MINOR: stick-table: Never exceed (MAX_SESS_STKCTR-1) when fetching a stkctr When a stick counter is fetched, it is important that the requested counter does not exceed (MAX_SESS_STKCTR -1). Actually, there is no bug with a default build because, by construction, MAX_SESS_STKCTR is defined to 3 and we know that we never exceed the max value. scN_* sample fetches are numbered from 0 to 2. For other sample fetches, the value is tested. But there is a bug if MAX_SESS_STKCTR is set to a lower value. For instance 1. In this case the counters sc1_* and sc2_* may be undefined. This patch fixes the issue #330. It must be backported as far as 1.7. (cherry picked from commit a9fa88a1eac9bd0ad2cfb761c4b69fd500a1b056) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 33c7e12479cb9bdc2e7e3783fda78a1b2c242363) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit e5c0021542f17019148f0abb4a93b254f86226f3) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 8af0b88c05ddb41340a2b9213a5f701d34fc6df3 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon Oct 21 09:55:49 2019 +0200 BUG/MINOR: ssl: Fix fd leak on error path when a TLS ticket keys file is parsed When an error occurred in the function bind_parse_tls_ticket_keys(), during the configuration parsing, the opened file is not always closed. To fix the bug, all errors are catched at the same place, where all ressources are released. This patch fixes the bug #325. It must be backported as far as 1.7. 
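For the TLS ticket keys fd leak above, a minimal sketch of the "catch all errors in one place" pattern the fix applies: every error path jumps to a single label where the file handle is closed, so no early return can leak the descriptor. The parsing itself is faked here.

    #include <stdio.h>
    #include <string.h>

    static int parse_tls_ticket_keys_file(const char *path)
    {
        FILE *f = fopen(path, "r");
        char line[512];
        int ret = -1;

        if (!f)
            return -1;
        while (fgets(line, sizeof(line), f)) {
            if (strlen(line) < 2)       /* stand-in for a real decode error */
                goto out;               /* no early return: always reach fclose() */
        }
        ret = 0;
    out:
        fclose(f);                      /* released on success and on every error */
        return ret;
    }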
(cherry picked from commit e566f3db11e781572382e9bfff088a26dcdb75c5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 2bbc80ded1bc90dbf406e255917a1aa59c52902c) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit dd68cbd4797ef27040d0fae5db6e7a7da8ca0952) [wt: context adjustments around the different use of base64dec()] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 9b1392e28d0408e4fe70b1926d3526f5c870326b Author: William Lallemand <wlallemand@haproxy.com> Date: Tue Oct 15 14:04:08 2019 +0200 BUG/MINOR: mworker/ssl: close openssl FDs unconditionally Patch 56996da ("BUG/MINOR: mworker/ssl: close OpenSSL FDs on reload") fixes a issue where the /dev/random FD was leaked by OpenSSL upon a reload in master worker mode. Indeed the FD was not flagged with CLOEXEC. The fix was checking if ssl_used_frontend or ssl_used_backend were set to close the FD. This is wrong, indeed the lua init code creates an SSL server without increasing the backend value, so the deinit is never done when you don't use SSL in your configuration. To reproduce the problem you just need to build haproxy with openssl and lua with an openssl which does not use the getrandom() syscall. No openssl nor lua configuration are required for haproxy. This patch must be backported as far as 1.8. Fix issue #314. (cherry picked from commit 5fdb5b36e1e0bef9b8a79c3550bd7a8751bac396) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 1b7c4dc3fc509a40debbf4ffa6342f56c7046e83) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit faa668d2db81cc1aab4f7c4c2a81642991acd9e3) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 41193068349a02eff415fbd76a1f07b2b0fd06f8 Author: Willy Tarreau <w@1wt.eu> Date: Thu May 9 13:53:28 2019 +0200 BUILD: ssl: fix again a libressl build failure after the openssl FD leak fix As with every single OpenSSL fix, LibreSSL build broke again, this time after commit 56996dabe ("BUG/MINOR: mworker/ssl: close OpenSSL FDs on reload"). A definitive solution will have to be found quickly. For now, let's exclude libressl from the version test. This patch must be backported to 1.9 since the fix above was already backported there. (cherry picked from commit affd1b980aa03be038e2d4504ec8b0bad7ea253d) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 35b44dae4b485c50fe6afd73f02be80179a05245) Signed-off-by: Willy Tarreau <w@1wt.eu> commit df0eb975bef0a015e9b0925d6991a9b6c201169d Author: Rob Allen <robert.allen1@ibm.com> Date: Fri May 3 09:11:32 2019 +0100 BUG/MINOR: mworker/ssl: close OpenSSL FDs on reload From OpenSSL 1.1.1, the default behaviour is to maintain open FDs to any random devices that get used by the random number library. As a result, those FDs leak when the master re-execs on reload; since those FDs are not marked FD_CLOEXEC or O_CLOEXEC, they also get inherited by children. Eventually both master and children run out of FDs. OpenSSL 1.1.1 introduces a new function to control whether the random devices are kept open. When clearing the keep-open flag, it also closes any currently open FDs, so it can be used to clean-up open FDs too. Therefore, a call to this function is made in mworker_reload prior to re-exec. The call is guarded by whether SSL is in use, because it will cause initialisation of the OpenSSL random number library if that has not already been done. This should be backported to 1.9 and 1.8. 
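A hedged sketch of the reload-time cleanup described above. The OpenSSL 1.1.1 entry point is assumed to be RAND_keep_random_devices_open() (treat the exact name as an assumption), and the SSL-in-use guard mirrors the reasoning in the commit message; this is not the actual mworker_reload() code.

    #include <unistd.h>
    #if defined(USE_OPENSSL)
    #include <openssl/rand.h>
    #endif

    static void reexec_master(char *const argv[], int ssl_in_use)
    {
    #if defined(USE_OPENSSL) && OPENSSL_VERSION_NUMBER >= 0x10101000L
        if (ssl_in_use)
            RAND_keep_random_devices_open(0);  /* also closes already-open random FDs */
    #endif
        execvp(argv[0], argv);                 /* re-exec without leaking the FDs */
    }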
(cherry picked from commit 56996dabe67b484b7c0e90192539c57e60483751) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 4810470596c5413fe82672d42d19db34bb599ec3) [wt: placed early in mworker_reload() as code differs a lot] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 19dd0431b06019d5cbd253662822b15412f67144 Author: Emeric Brun <ebrun@haproxy.com> Date: Thu Oct 17 14:53:03 2019 +0200 BUG/MEDIUM: ssl: 'tune.ssl.default-dh-param' value ignored with openssl > 1.1.1 If openssl 1.1.1 is used, c2aae74f0 commit mistakenly enables DH automatic feature from openssl instead of ECDH automatic feature. There is no impact for the ECDH one because the feature is always enabled for that version. But doing this, the 'tune.ssl.default-dh-param' was completely ignored for DH parameters. This patch fix the bug calling 'SSL_CTX_set_ecdh_auto' instead of 'SSL_CTX_set_dh_auto'. Currently some users may use a 2048 DH bits parameter, thinking they're using a 1024 bits one. Doing this, they may experience performance issue on light hardware. This patch warns the user if haproxy fails to configure the given DH parameter. In this case and if openssl version is > 1.1.0, haproxy will let openssl to automatically choose a default DH parameter. For other openssl versions, the DH ciphers won't be usable. A commonly case of failure is due to the security level of openssl.cnf which could refuse a 1024 bits DH parameter for a 2048 bits key: $ cat /etc/ssl/openssl.cnf ... [system_default_sect] MinProtocol = TLSv1 CipherString = DEFAULT@SECLEVEL=2 This should be backport into any branch containing the commit c2aae74f0. It requires all or part of the previous CLEANUP series. This addresses github issue #324. (cherry picked from commit 6624a90a9ac2edb947a8c70fa6a8a283449750c6) Signed-off-by: Emeric Brun <ebrun@haproxy.com> (cherry picked from commit d6de151248603b357565ae52fe92440e66c1177c) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 89e61b9ad4db1097a3d53b83d914f6b0b8edc4ee) Signed-off-by: Willy Tarreau <w@1wt.eu> commit df1712d04c997faa192bcc0d26fd9af5e3fec313 Author: Emeric Brun <ebrun@haproxy.com> Date: Thu Oct 17 16:45:56 2019 +0200 CLEANUP: bind: handle warning label on bind keywords parsing. All bind keyword parsing message were show as alerts. With this patch if the message is flagged only with ERR_WARN and not ERR_ALERT it will show a label [WARNING] and not [ALERT]. (cherry picked from commit 0655c9b22213a0f5716183106d86a995e672d19b) Signed-off-by: Emeric Brun <ebrun@haproxy.com> (cherry picked from commit df6dd890fd1167446326e99a816b9ba7ac86329f) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit c6a03f45f3ba27a47e7a7a62d6e40ee269e0f50d) [wt: code is in cfgparse.c] Signed-off-by: Willy Tarreau <w@1wt.eu> commit b1e3ee6f214d82ebe98140f577777b4c47d88084 Author: Emeric Brun <ebrun@haproxy.com> Date: Thu Oct 17 13:27:40 2019 +0200 CLEANUP: ssl: make ssl_sock_load_dh_params handle errcode/warn ssl_sock_load_dh_params used to return >0 or -1 to indicate success or failure. Make it return a set of ERR_* instead so that its callers can transparently report its status. Given that its callers only used to know about ERR_ALERT | ERR_FATAL, this is the only code returned for now. An error message was added in the case of failure and the comment was updated. 
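A small sketch of the calling convention this cleanup series moves to: loader functions return a bitmask of ERR_* codes that callers simply OR into their own status instead of interpreting ad-hoc -1/>0 return values. Only the flag names come from the commit messages; the values here are illustrative.

    #define ERR_NONE  0x00
    #define ERR_WARN  0x01      /* illustrative values */
    #define ERR_ALERT 0x02
    #define ERR_FATAL 0x04

    static int load_dh_params(const char *file)
    {
        if (!file)
            return ERR_ALERT | ERR_FATAL;   /* fatal configuration error */
        return ERR_NONE;
    }

    static int load_everything(const char *dh_file)
    {
        int err = ERR_NONE;

        err |= load_dh_params(dh_file);     /* status propagates transparently */
        return err;
    }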
(cherry picked from commit 7a88336cf83cd1592fb8e7bc456d72c00c2934e4) Signed-off-by: Emeric Brun <ebrun@haproxy.com> (cherry picked from commit cfc1afe9f21ec27612ed4ad84c4a066c68ca24af) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit e740508a0dd5e30433a86333b874a0833f989e17) [wt: context adjustments] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 82b00a11b298a497b4ca93a3f3bf3c7f1399ebc2 Author: Emeric Brun <ebrun@haproxy.com> Date: Thu Oct 17 13:25:14 2019 +0200 CLEANUP: ssl: make ssl_sock_put_ckch_into_ctx handle errcode/warn ssl_sock_put_ckch_into_ctx used to return 0 or >0 to indicate success or failure. Make it return a set of ERR_* instead so that its callers can transparently report its status. Given that its callers only used to know about ERR_ALERT | ERR_FATAL, this is the only code returned for now. And a comment was updated. (cherry picked from commit a96b582d0eaf1a7a9b21c71b8eda2965f74699d4) Signed-off-by: Emeric Brun <ebrun@haproxy.com> (cherry picked from commit 394701dc80ac9d429b12d405973fb30c348b81f3) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit ff4b8fd98b5763ba73b7bf1abb6fd9f8429fb2cc) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 5b673a658fb1a0a42dbe948b413fceeff1af0642 Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 16 16:42:19 2019 +0200 CLEANUP: ssl: make ssl_sock_load_cert*() return real error codes These functions were returning only 0 or 1 to mention success or error, and made it impossible to return a warning. Let's make them return error codes from ERR_* and map all errors to ERR_ALERT|ERR_FATAL for now since this is the only code that was set on non-zero return value. In addition some missing comments were added or adjusted around the functions' return values. (cherry picked from commit bbc91965bf4bc7e08c5a9b93fdfa28a64c0949d3) [EBR: also include a part of 054563de1] Signed-off-by: Emeric Brun <ebrun@haproxy.com> (cherry picked from commit b131c870f9fbb5553d8970bb039609f97e1cc6e6) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit d0dd39c5166a40430b8fe0e1645ccb9f54560fa6) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 072124d62e9b8166fd7ce5968885ec1f5ef96f74 Author: William Lallemand <wlallemand@haproxy.com> Date: Fri Oct 4 17:36:55 2019 +0200 BUG/MINOR: ssl: abort on sni_keytypes allocation failure The ssl_sock_populate_sni_keytypes_hplr() function does not return an error upon an allocation failure. The process would probably crash during the configuration parsing if the allocation fail since it tries to copy some data in the allocated memory. This patch could be backported as far as 1.5. (cherry picked from commit 28a8fce485a94b636f6905134509c1150690b60f) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 4801c70bead696bed077fd71a55f6ff35bb6f9f5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit e697d6d366ee13b7645c0886c97d912242bff87d) [wt: context] Signed-off-by: Willy Tarreau <w@1wt.eu> commit c854d49877e2739cf167dae2d2b62f55f2075352 Author: William Lallemand <wlallemand@haproxy.com> Date: Thu Oct 3 23:46:33 2019 +0200 BUG/MINOR: ssl: abort on sni allocation failure The ssl_sock_add_cert_sni() function never return an error when a sni_ctx allocation fail. It silently ignores the problem and continues to try to allocate other snis. It is unlikely that a sni allocation will succeed after one failure and start a configuration without all the snis. 
But to avoid any problem we return a -1 upon an sni allocation error and stop the configuration parsing. This patch must be backported in every version supporting the crt-list sni filters. (as far as 1.5) (cherry picked from commit fe49bb3d0c046628d67d57da15a7034cc2230432) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> [Cf: slightly adapted for 2.0] (cherry picked from commit 24e292c1054616e06b1025441ed7a0a59171d108) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 63e63cb341cfb7607c83a085700ac9bf8fe14a3a) [wt: minor adjustments. Note: ssl_sock_add_cert_sni()'s return values are not documented, only did minor validity checks, at least it builds] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 7abec34de2e189544d17f6dbfef7533d9b7be764 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Thu Oct 17 14:40:48 2019 +0200 BUG/MINOR: tcp: Don't alter counters returned by tcp info fetchers There are 2 kinds of tcp info fetchers. Those returning a time value (fc_rtt and fc_rttval) and those returning a counter (fc_unacked, fc_sacked, fc_retrans, fc_fackets, fc_lost, fc_reordering). Because of a bug, the counters were handled as time values, and by default, were divided by 1000 (because of an invalid conversion from us to ms). To work around this bug and have the right value, the argument "us" had to be specified. So now, tcp info fetchers returning a counter don't support any argument anymore. To not break old configurations, if an argument is provided, it is ignored and a warning is emitted during the configuration parsing. In addition, parameter validiation is now performed during the configuration parsing. This patch must be backported as far as 1.7. (cherry picked from commit ba0c53ef71cd7d2b344de318742d0ef239fd34e4) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 297df1860c6d09c7edde1dd6b0bd4f9758600ff3) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 1226abb019fb4e925aa387710ba125a8e2e18d35) [wt: adjusted context and pre-1.9 buffer API] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 866fbea0c3067387b4c2c0a5d01c838551456fdb Author: Miroslav Zagorac <mzagorac@haproxy.com> Date: Mon Oct 14 17:15:56 2019 +0200 BUG/MINOR: WURFL: fix send_log() function arguments If the user agent data contains text that has special characters that are used to format the output from the vfprintf() function, haproxy crashes. String "%s %s %s" may be used as an example. 
% curl -A "%s %s %s" localhost:10080/index.html curl: (52) Empty reply from server haproxy log: 00000000:WURFL-test.clireq[00c7:ffffffff]: GET /index.html HTTP/1.1 00000000:WURFL-test.clihdr[00c7:ffffffff]: host: localhost:10080 00000000:WURFL-test.clihdr[00c7:ffffffff]: user-agent: %s %s %s 00000000:WURFL-test.clihdr[00c7:ffffffff]: accept: */* segmentation fault (core dumped) gdb 'where' output: #0 strlen () at ../sysdeps/x86_64/strlen.S:106 #1 0x00007f7c014a8da8 in _IO_vfprintf_internal (s=s@entry=0x7ffc808fe750, format=<optimized out>, format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n", ap=ap@entry=0x7ffc808fe8b8) at vfprintf.c:1637 #2 0x00007f7c014cfe89 in _IO_vsnprintf ( string=0x55cb772c34e0 "WURFL: retrieve header request returns [(null) %s %s %s B,w\313U", maxlen=<optimized out>, format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n", args=args@entry=0x7ffc808fe8b8) at vsnprintf.c:114 #3 0x000055cb758f898f in send_log (p=p@entry=0x0, level=level@entry=5, format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n") at src/log.c:1477 #4 0x000055cb75845e0b in ha_wurfl_log ( message=message@entry=0x55cb75989460 "WURFL: retrieve header request returns [%s]\n") at src/wurfl.c:47 #5 0x000055cb7584614a in ha_wurfl_retrieve_header (header_name=<optimized out>, wh=0x7ffc808fec70) at src/wurfl.c:763 In case WURFL (actually HAProxy) is not compiled with debug option enabled (-DWURFL_DEBUG), this bug does not come to light. This patch could be backported in every version supporting the ScientiaMobile's WURFL. (as far as 1.7) (cherry picked from commit f0eb3739ac5460016455cd606d856e7bd2b142fb) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 5e1c1468789d80213325054b6fc1dbd1c70d7776) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 9a5cba43d2da5cb80791868e74eed8b0b9f11148) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 117116705ef90863c515593895c29ed0a841c3ca Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon Oct 14 11:29:48 2019 +0200 BUG/MINOR: chunk: Fix tests on the chunk size in functions copying data When raw data are copied or appended in a chunk, the result must not exceed the chunk size but it can reach it. Unlike functions to copy or append a string, there is no terminating null byte. This patch must be backported as far as 1.8. Note in 1.8, the functions chunk_cpy() and chunk_cat() don't exist. (cherry picked from commit 48fa033f2809af265c230a7c7cf86413b7f9909b) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit da0889df9f33c2a8585e95c95db0f81a80dcc40c) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 882fc574d7d7c468afb2d575501fb6132f4e0193) [wt: dropped chunk_{cpy,cat}; ctx adjustments] Signed-off-by: Willy Tarreau <w@1wt.eu> commit e6f11545eb58aeb131b3de1b1651273c62c8b9fc Author: William Lallemand <wlallemand@haproxy.com> Date: Fri Oct 4 17:24:39 2019 +0200 BUG/MINOR: ssl: free the sni_keytype nodes This patch frees the sni_keytype nodes once the sni_ctxs have been allocated in ssl_sock_load_multi_ckchn(); Could be backported in every version using the multi-cert SSL bundles. 
(cherry picked from commit 8ed5b965872e3b0cd6605f37bc8fe9f2819ce03c) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit e92c030dfcb298daa7175a789a2fdea42a4784c5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit eb5e2b9be62454d927532a135b0519d5e232edd4) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 40e736d89161344c284f9748accb6837112635cc Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 9 07:19:02 2019 +0200 MINOR: stats: mention in the help message support for "json" and "typed" Both "show info" and "show stat" support the "typed" output format and the "json" output format. I just never can remind them, which is an indication that some help is missing. (cherry picked from commit 6103836315fd31418e1a09e820dfaf1cdd0abd98) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit e9038438c813a6ff028ded6c4250fd2269c1a215) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit df7f897ba901686ca0f5833ce39dab498823615b) Signed-off-by: Willy Tarreau <w@1wt.eu> commit aea659f42bd5510ff37c7d59e9f80e5fc7b240d5 Author: Willy Tarreau <w@1wt.eu> Date: Mon Oct 7 14:58:02 2019 +0200 DOC: clarify some points around http-send-name-header's behavior The directive causes existing an header to be removed, which is not explicitly mentioned though already being relied on, and also mention the fast that it should not be used to modify transport level headers and that doing it on Host is more than border-line and definitely not a supported long-term option eventhough it currently works. (cherry picked from commit 81bef7e89993c963d934eab21cca744cb6a6cb03) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit e0e32794c781c4b0798ee7d9887e5d6c5d8b6405) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 4e2f7e359458e017cd724b558540d476339f07ed) Signed-off-by: Willy Tarreau <w@1wt.eu> commit de358a781fa4ce80000239e698e518685bbcc579 Author: Willy Tarreau <w@1wt.eu> Date: Mon Oct 7 14:06:34 2019 +0200 BUG/MEDIUM: cache: make sure not to cache requests with absolute-uri If a request contains an absolute URI and gets its Host header field rewritten, or just the request's URI without touching the Host header field, it can lead to different Host and authority parts. The cache will always concatenate the Host and the path while a server behind would instead ignore the Host and use the authority found in the URI, leading to incorrect content possibly being cached. Let's simply refrain from caching absolute requests for now, which also matches what the comment at the top of the function says. Later we can improve this by having a special handling of the authority. This should be backported as far as 1.8. 
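A sketch of the conservative rule adopted by the cache fix above: if the request line carries an absolute URI, simply refuse to cache it, so a Host/authority mismatch can never poison the cache key. The scheme test is deliberately simplified.

    #include <strings.h>

    /* Simplified: a real parser would follow the absolute-form grammar. */
    static int uri_is_absolute(const char *uri)
    {
        return strncasecmp(uri, "http://", 7) == 0 ||
               strncasecmp(uri, "https://", 8) == 0;
    }

    static int may_cache_request(const char *uri)
    {
        return !uri_is_absolute(uri);       /* refrain from caching absolute-form requests */
    }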
(cherry picked from commit 22c6107dba1127a1e6d204dc2a6da63c09f2d934) [wt: context; added the legacy-mode version as well] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit e77564496d252762ecb1b25022774f134e90c7ac) [wt: context adjustments; s/http_get_stline/http_find_stline] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 436d6d8815b15def141270af3de1fbf1c175d803) [wt: implemented differently due to pre-1.9 HTTP API] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 626f412a55941df73b0ccb27a605cc652ff5df0e Author: Tim Duesterhus <tim@bastelstu.be> Date: Sun Sep 29 23:03:07 2019 +0200 BUG/MINOR: lua: Properly initialize the buffer's fields for string samples in hlua_lua2(smp|arg) `size` is used in conditional jumps and valgrind complains: ==24145== Conditional jump or move depends on uninitialised value(s) ==24145== at 0x4B3028: smp_is_safe (sample.h:98) ==24145== by 0x4B3028: smp_make_safe (sample.h:125) ==24145== by 0x4B3028: smp_to_stkey (stick_table.c:936) ==24145== by 0x4B3F2A: sample_conv_in_table (stick_table.c:1113) ==24145== by 0x420AD4: hlua_run_sample_conv (hlua.c:3418) ==24145== by 0x54A308F: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0) ==24145== by 0x54AFEFC: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0) ==24145== by 0x54A29F1: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0) ==24145== by 0x54A3523: lua_resume (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0) ==24145== by 0x426433: hlua_ctx_resume (hlua.c:1097) ==24145== by 0x42D7F6: hlua_action (hlua.c:6218) ==24145== by 0x43A414: http_req_get_intercept_rule (http_ana.c:3044) ==24145== by 0x43D946: http_process_req_common (http_ana.c:500) ==24145== by 0x457892: process_stream (stream.c:2084) Found while investigating issue #306. A variant of this issue exists since 55da165301b4de213dacf57f1902c2142e867775, which was using the old `chunk` API instead of the `buffer` API thus this patch must be backported to HAProxy 1.6 and higher. (cherry picked from commit 29d2e8aa9abe48539607692ba69a6a5fb4e96ca8) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit f371b834c61394196b3c3cb4c76cabbe80f9e6fe) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit a1ed96379f01943a0989bbcf8ca05e22e6f983f9) [wt: adapt to pre-1.9 chunk API] Signed-off-by: Willy Tarreau <w@1wt.eu> commit c22014f76d02d8a38abd699d9793d7e8727ecbe6 Author: Krisztián Kovács (kkovacs) <Krisztian.Kovacs@oneidentity.com> Date: Fri Sep 20 14:48:19 2019 +0000 BUG/MEDIUM: namespace: fix fd leak in master-worker mode When namespaces are used in the configuration, the respective namespace handles are opened during config parsing and stored in an ebtree for lookup later. Unfortunately, when the master process re-execs itself these file descriptors were not closed, effectively leaking the fds and preventing destruction of namespaces no longer present in the configuration. This change fixes this issue by opening the namespace file handles as close-on-exec, making sure that they will be closed during re-exec. 
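A minimal sketch of the namespace change described above: open the handle with O_CLOEXEC so it cannot survive the master's re-exec. The /var/run/netns path layout is an assumption made for the example.

    #include <fcntl.h>
    #include <stdio.h>

    static int open_named_netns(const char *name)
    {
        char path[256];

        snprintf(path, sizeof(path), "/var/run/netns/%s", name);
        return open(path, O_RDONLY | O_CLOEXEC);   /* closed automatically on execve() */
    }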
(cherry picked from commit 538aa7168fca1adf2ecd0aa4a47e6b8856275f55) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 59af61a19493ccc50e3815d84c9323762cf28fcd) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit a7f2abde42a04faba327b552d38749fe001e22b5) [wt: adjusted to old buffer API] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 085b1eb5e555898e0316a365e37c8b546c91fe1f Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Sep 27 10:45:47 2019 +0200 DOC: Fix documentation about the cli command to get resolver stats In the management guide, this command was still referenced as "show stat resolvers" instead of "show resolvers". The cli command was fixed by the commit ff88efbd7 ("BUG/MINOR: dns: Fix CLI keyword declaration"). This patch fixes the issue #296. It can be backported as fas as 1.7. (cherry picked from commit 78c430616552e024fc1e7a6650302702ae4544d1) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 7a5303a392ffa16de95f21b5d1ce4acb9a1778cf) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 61a74eaf84dad6d04986e517a71e172b53a15f80) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 8b89d8d1fea22cee5a75b15e7d931e98c9b95a04 Author: Kevin Zhu <ipandtcp@gmail.com> Date: Tue Sep 17 15:05:45 2019 +0200 BUG/MEDIUM: spoe: Use a different engine-id per process SPOE engine-id is the same for all processes when nbproc is more than 1. So, in async mode, an agent receiving a NOTIFY frame from a process may send the ACK to another process. It is abviously wrong. A different engine-id must be generated for each process. This patch must be backported to 2.0, 1.9 and 1.8. (cherry picked from commit d87b1a56d526568b55ee33b77f77c87455026ae1) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 652c8e238537373d69aff5c0608e35a9f373dd05) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit a8ec7eeca04953c274fcf187a090e316e4211465) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 0840cfc528d94e3ffe0acd497bd0bdd43dad5184 Author: Willy Tarreau <w@1wt.eu> Date: Tue Mar 5 12:04:55 2019 +0100 MINOR: tools: implement my_flsl() We already have my_ffsl() to find the lowest bit set in a word, and this patch implements the search for the highest bit set in a word. On x86 it uses the bsr instruction and on other architectures it uses an efficient implementation. (cherry picked from commit d87a67f9bc422bf41b6b81c1e99d9aebbbc18d8e) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> [Cf: This one is required to backport some patches on the SPOE. It is just a new function, so there is no impact] (cherry picked from commit 919653f0419579c825c99488adfff7ac5975dc63) Signed-off-by: Willy Tarreau <w@1wt.eu> commit e583225fa6b1eae1bbf643597a13cae2ebd9b539 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Thu Sep 26 16:38:28 2019 +0200 BUG/MAJOR: mux_h2: Don't consume more payload than received for skipped frames When a frame is received for a unknown or already closed stream, it must be skipped. This also happens when a stream error is reported. But we must be sure to only skip received data. In the loop in h2_process_demux(), when such frames are handled, all the frame lenght is systematically skipped. If the frame payload is partially received, it leaves the demux buffer in an undefined state. Because of this bug, all sort of errors may be observed, like crash or intermittent freeze. 
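A sketch of the rule restored by the mux_h2 fix above: when a frame must be skipped, only consume what has actually been received and come back for the rest later. Buffer handling is reduced to plain byte counters here; the caller keeps the frame in a "skipping" state while frame_left stays non-zero.

    #include <stddef.h>

    /* Returns how many payload bytes may be dropped right now. */
    static size_t skip_frame_payload(size_t frame_left, size_t buf_data)
    {
        return frame_left < buf_data ? frame_left : buf_data;
    }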
This patch must be backported to 2.0, 1.9 and 1.8. (cherry picked from commit 5112a603d9507cac84ae544863251e814e5eb8d8) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit e12a26f6b4f7d0f2cf49b24eeb2c5cb8218cc974) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 35746784c1f0306e49f793012acb045cb55df55d) [wt: adapt to pre-1.9 buffer API] Signed-off-by: Willy Tarreau <w@1wt.eu> commit c22da9a9e043ab121e21588217861c119fde9483 Author: Krisztian Kovacs <krisztian.kovacs@oneidentity.com> Date: Tue Sep 24 14:12:13 2019 +0200 BUG/MEDIUM: namespace: close open namespaces during soft shutdown When doing a soft shutdown, we won't be making new connections anymore so there's no point in keeping the namespace file descriptors open anymore. Keeping these open effectively makes it impossible to properly clean up namespaces which are no longer used in the new configuration until all previously opened connections are closed in the old worker process. This change introduces a cleanup function that is called during soft shutdown that closes all namespace file descriptors by iterating over the namespace ebtree. (cherry picked from commit 710d987cd62ab0779418f14aa2168dc10ef6bac7) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 6d215536f4aa2e3c95fde9d001a1c894d4eecb93) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 5a861225731bf181bef3e0b6b92490dd836bd52c) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 2d7bcb43f7f01131a42a446ac48e7ba24a0f200b Author: Willy Tarreau <w@1wt.eu> Date: Tue Sep 3 18:55:02 2019 +0200 BUG/MEDIUM: check/threads: make external checks run exclusively on thread 1 See GH issues #141 for all the context. In short, registered signal handlers are not inherited by other threads during startup, which is normally not a problem, except that we need that the same thread as the one doing the fork() cleans up the old process using waitpid() once its death is reported via SIGCHLD, as happens in external checks. The only simple solution to this at the moment is to make sure that external checks are exclusively run on the first thread, the one which registered the signal handlers on startup. It will be far more than enough anyway given that external checks must not require to be load balanced on multiple threads! A more complex solution could be designed over the long term to let each thread deal with all signals but it sounds overkill. This must be backported as far as 1.8. (cherry picked from commit 6dd4ac890b5810b0f0fe81725fda05ad3d052849) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit b143711afe833f9824a7372b88ef9435ff240e9a) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit c4c0057b89e0de93521a6caa5a3cd4d375357dc4) Signed-off-by: Willy Tarreau <w@1wt.eu> commit b560440ea26e44a950155d1932c1cd4b4dd7fc00 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Sep 13 09:50:15 2019 +0200 BUG/MINOR: acl: Fix memory leaks when an ACL expression is parsed This only happens during the configuration parsing. First leak is the string representing the last converter parsed, if any. The second one is on the error path, when the allocation of the ACL expression failed. In this case, the sample was not released. This patch fixes the issue #256. It must be backported to all stable versions. 
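For the ACL-parsing leaks above, a toy sketch of the two releases the patch adds, using free() on stand-in pointers: the saved converter keyword is always released, and the sample is released when allocating the ACL expression fails. Names are illustrative, not the real ACL structures.

    #include <stdlib.h>

    struct toy_sample { int dummy; };
    struct toy_acl_expr { struct toy_sample *smp; };

    static struct toy_acl_expr *build_acl_expr(struct toy_sample *smp, char *ckw)
    {
        struct toy_acl_expr *expr = calloc(1, sizeof(*expr));

        free(ckw);              /* leak 1: the last converter keyword parsed */
        if (!expr) {
            free(smp);          /* leak 2: the sample, on the error path */
            return NULL;
        }
        expr->smp = smp;
        return expr;
    }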
(cherry picked from commit 361935aa1e327d2249453eab0b8f0300683f47b2) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 6a4c746b63c89c7d4c5f21d79ceb45207ebb24bb) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit d84d945ac2a3d33044d7d56b8ec709d9e6a0aec3) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 2920d940f5eb00b14b59d14ddb7caeeb3e3ccc55 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Sep 6 15:24:55 2019 +0200 BUG/MINOR: filters: Properly set the HTTP status code on analysis error When a filter returns an error during the HTTP analysis, an error must be returned if the status code is not already set. On the request path, an error 400 is returned. On the response path, an error 502 is returned. The status is considered as unset if its value is not strictly positive. If needed, this patch may be backported to all versions having filters (as far as 1.7). Because nobody have never report any bug, the backport to 2.0 is probably enough. (cherry picked from commit e058f7359f3822beb8552f77a6d439cb053edb3f) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit eab13f042b9a98cadb215a2c29f2ee9164c18f19) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit c7db58bd3ee59153db8e9326154b6fbb1181a756) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 3bd4bbdb9f54c18856aeb66b4b9f4a698973d3d3 Author: Willy Tarreau <w@1wt.eu> Date: Thu Sep 12 14:01:40 2019 +0200 BUG/MEDIUM: http: also reject messages where "chunked" is missing from transfer-enoding Nathan Davison (@ndavison) reported that in legacy mode we don't correctly reject requests or responses featuring a transfer-encoding header missing the "chunked" value. As mandated in the protocol spec, the test verifies that "chunked" is the last one, but only does so when it is present. As such, "transfer-encoding: foobar" is not rejected, only "transfer-encoding: chunked, foobar" will be. The impact is limited, but if combined with "http-reuse always", it could be used as a help to construct a content smuggling attack against a vulnerable component employing a lenient parser which would ignore the content-length header as soon as it sees a transfer-encoding one, without even parsing it. In this case haproxy would fail to protect it. The fix consists in completing the existing checks to verify that "chunked" was present if any "transfer-encoding" header was met, otherwise either reject the request message or make the response end on a close. This fix is only for 2.0 and older versions as legacy mode was removed from 2.1. It should be backported to all maintained versions. (cherry picked from commit 196a7df44d8129d1adc795da020b722614d6a581) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 5513fcaa601dd344be548430fc1760dbedebf4f2) Signed-off-by: Willy Tarreau <w@1wt.eu> commit ba3abeda541ffe93fd528e9bc8701d4faadfb680 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Wed Sep 4 09:39:42 2019 +0200 BUG/MEDIUM: proto-http: Always start the parsing if there is no outgoing data When we are waiting for a request or a response, if the channel's buffer is not rewritable (the reservce is not fully free), nothing is done and we wait to have a rewritable buffer. It was an old implicit assumption of HTTP analyzers. On old versions, at this stage, if a buffer was not rewritable, it meant some outgoing data were pending to be sent. 
On recent versions, it should not happen because all outgoing data are sent before starting the analysis of the next transaction. But the applets may be lead to use the reserve. For instance, the cache applet adds the header "Age" to cached responses. It may use the reserve to do so if the size of the response headers is huge. So, in such case, the implicit assumption of a no rewritable buffer because of output data is wrong. But the message analysis remains blocked, sometime infinitely depending on circumstances. To fix the bug and to avoid any ambiguity, we now also check if there are some outgoing data when the buffer is not rewritable to postpone the message analysis. In fact, this code may probably be removed because it should never happen. But I prefer to be conservative here and don't introduce a bug because of an unknown/unexpected hidden corner case. Anyway, it is not a big deal because all legacy HTTP code is removed in the 2.1. This is a direct commit to the 2.0 branch, as the problem doesn't exist in master. It must be backported at least to 1.9 and 1.8 because of the cache. But it may be also backported to all stable versions. This patch should partly fix the github issue #233. (cherry picked from commit 3d36d4e720a76a12c7f6cd64c7971237d7d92d78) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit d09d66853a3700d2b9261c02e1027d13b4420f5b) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit ae9e97ed9d2ac46515e0fba1cb71028169cc3be6 Author: Willy Tarreau <w@1wt.eu> Date: Mon Aug 26 10:55:52 2019 +0200 BUG/MEDIUM: listener/threads: fix an AB/BA locking issue in delete_listener() The delete_listener() function takes the listener's lock before taking the proto_lock, which is contrary to what other functions do, possibly causing an AB/BA deadlock. In practice the two only places where both are taken are during protocol_enable_all() and delete_listener(), the former being used during startup and the latter during stop. In practice during reload floods, it is technically possible for a thread to be initializing the listeners while another one is stopping. While this is too hard to trigger on 2.0 and above due to the synchronization of all threads during startup, it's reasonably easy to do in 1.9 by having hundreds of listeners, starting 64 threads and flooding them with reloads like this : $ while usleep 50000; do killall -USR2 haproxy; done Usually in less than a minute, all threads will be deadlocked. The fix consists in always taking the proto_lock before the listener lock. It seems to be the only place where these two locks were reversed. This fix needs to be backported to 2.0, 1.9, and 1.8. (cherry picked from commit 6ee9f8df3bfbb811526cff3313da5758b1277bc6) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit b10c8d7641cc8ceae6fba4506b7f987d66109bd9) [wt: adjusted context] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit bf64d1021bd0db1f9892ec34473e34033cdb1dd9) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 7ae43ca14823ae61c547ac08c0a237b6ec55e04a Author: Willy Tarreau <w@1wt.eu> Date: Mon Aug 26 10:37:39 2019 +0200 BUG/MINOR: mworker: disable SIGPROF on re-exec If haproxy is built with profiling enabled with -pg, it is possible to see the master quit during a reload while it's re-executing itself with error code 155 (signal 27) saying "Profile timer expired)". This happens if the SIGPROF signal is delivered during the execve() call while the handler was already unregistered. 
The issue itself is not directly inside haproxy but it's easy to address. This patch disables this signal before calling execvp() during a master reload. A simple test for this consists in running this little script with haproxy started in master-worker mode : $ while usleep 50000; do killall -USR2 haproxy; done This fix should be backported to all versions using the master-worker model. (cherry picked from commit e0d86e2c1caaaa2141118e3309d479de5f67e855) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit f259fcc00a04e633a7a64f894a719f78f3644867) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit f2c9971cb51d28f0c4422d1197447406aa72e945) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 42c7b87b18bce3a12a8b1a08435e393ed543f79f Author: n9@users.noreply.github.com <n9@users.noreply.github.com> Date: Fri Aug 23 11:21:05 2019 +0200 DOC: fixed typo in management.txt replaced fot -> for added two periods (cherry picked from commit 25a1c8e4539c12c19a3fe04aabe563cdac5e36db) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 7c80af0fb53f2a1d93a597f7d97cc67996e36be2) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 4c43256c7e78643f8972f4248ed11688137609bb) Signed-off-by: Willy Tarreau <w@1wt.eu> commit dcb8c973fdfa6b96b651b06740b74b1d492cb92d Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon May 6 09:53:10 2019 +0200 BUG/MEDIUM: spoe: Be sure the sample is found before setting its context When a sample fetch is encoded, we use its context to set info about the fragmentation. But if the sample is not found, the function sample_process() returns NULL. So we me be sure the sample exists before setting its context. This patch must be backported to 1.9 and 1.8. (cherry picked from commit 3b1d004d410129efcf365643d2583dcd2cb6ed0f) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 2e062883b8f94500314b7c863c1a13e3c9af23ca) [wt: adjust buf->chunk context] Signed-off-by: Willy Tarreau <w@1wt.eu> commit b6aa92725eaf823c4b18316729eae494783daa85 Author: Olivier Houchard <ohouchard@haproxy.com> Date: Mon May 6 18:58:48 2019 +0200 MINOR: doc: Document allow-0rtt on the server line. Briefly document allow-0rtt on the server line, and only the part that apply to 1.8 and 1.9. This should be backported to 1.8 and 1.9. (cherry picked from commit 8cb2d2e94199b8a6a9186ec12ee8146421a5d227) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 895b6a4568287b87d69599f347da01dcd1cfc9b2) [wt: context] Signed-off-by: Willy Tarreau <w@1wt.eu> commit dc90debd638a2aa94e062e66c00b1b8a9ab3c115 Author: Willy Tarreau <w@1wt.eu> Date: Sun May 5 10:11:39 2019 +0200 BUG/MINOR: logs/threads: properly split the log area upon startup If logs were emitted before creating the threads, then the dataptr pointer keeps a copy of the end of the log header. Then after the threads are created, the headers are reallocated for each thread. However the end pointer was not reset until the end of the first second, which may result in logs emitted by multiple threads during the first second to be mangled, or possibly in some cases to use a memory area that was reused for something else. The fix simply consists in reinitializing the end pointers immediately when the threads are created. This fix must be backported to 1.9 and 1.8. 
(cherry picked from commit 55e2f5ad14a6d9ec39c218296ad3f1a521cc74a1) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 55c3bd480fbbbb4692368655d3d4a425b5248e2a) [wt: ctx, buf->chunk] Signed-off-by: Willy Tarreau <w@1wt.eu> commit ae6824e2c836f1714827e9d3f585e729ea022f30 Author: Willy Tarreau <w@1wt.eu> Date: Sun May 5 06:54:22 2019 +0200 BUG/MEDIUM: checks: make sure the warmup task takes the server lock The server warmup task is used when a server uses the "slowstart" parameter. This task affects the server's weight and maxconn, and may dequeue pending connections from the queue. This must be done under the server's lock, which was not the case. This must be backported to 1.9 and 1.8. (cherry picked from commit 4fc49a9aabacc8028877e2dcbdb54d8a19c398c4) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 207ba5a6bc1c03f2ba15ac3cd49bfa756fb760bb) Signed-off-by: Willy Tarreau <w@1wt.eu> commit ad838cae47c15dc0be018be6c081e241d41ed45f Author: Olivier Houchard <ohouchard@haproxy.com> Date: Fri May 3 20:56:19 2019 +0200 BUG/MEDIUM: ssl: Use the early_data API the right way. We can only read early data if we're a server, and write if we're a client, so don't attempt to mix both. This should be backported to 1.8 and 1.9. (cherry picked from commit 010941f87605e8219d25becdbc652350a687d6a2) [wt: minor context adjustments due to latest SSL API changes in 2.0] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 3d14cbddd971f8f301f795c8446ae2bcadab6cc2) Signed-off-by: Willy Tarreau <w@1wt.eu> commit a196f480348402a263aa65eed55261e9a59d2da7 Author: Willy Tarreau <w@1wt.eu> Date: Thu Sep 6 14:52:21 2018 +0200 MINOR: connection: add new function conn_is_back() This function returns true if the connection is a backend connection and false if it's a frontend connection. (cherry picked from commit 57f8185625f967f868187d336f995fac28f83fc5) [wt: backported since used by next commit] Signed-off-by: Willy Tarreau <w@1wt.eu> commit ccb3136727d1fd5efccd4689199aa29f530f6ed0 Author: Dragan Dosen <ddosen@haproxy.com> Date: Tue Apr 30 00:38:36 2019 +0200 BUG/MINOR: haproxy: fix rule->file memory leak When using the "use_backend" configuration directive, the configuration file name stored as rule->file was not freed in some situations. This was introduced in commit 4ed1c95 ("MINOR: http/conf: store the use_backend configuration file and line for logs"). This patch should be backported to 1.9, 1.8 and 1.7. (cherry picked from commit 2a7c20f602e5d40e9f23c703fbcb12e3af762337) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 60277d1a38b45b014478d33627a9bbb99cc9ee9e) Signed-off-by: Willy Tarreau <w@1wt.eu> commit db95dd53f88bb15e288b553de5c6687260756f03 Author: Willy Tarreau <w@1wt.eu> Date: Tue Feb 12 10:59:32 2019 +0100 BUILD/MINOR: stream: avoid a build warning with threads disabled gcc 6+ complains about a possible null-deref here due to the test in objt_server() : if (objt_server(s->target)) HA_ATOMIC_ADD(&objt_server(s->target)->counters.retries, 1); Let's simply change it to __objt_server(). This can be backported to 1.9 and 1.8. (cherry picked from commit 1ef724e2169eaff7f0272278c3fba9b34d5c7f78) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 3d3b67f1877718abbbc8cc500aae373640e454e9) Signed-off-by: Willy Tarreau <w@1wt.eu>
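As a side note on the build warning above, here is a minimal, self-contained sketch of the checked/unchecked accessor pattern it refers to; the types and names below are simplified stand-ins, not haproxy's real obj_type API.

#include <stddef.h>

/* Simplified stand-ins for the target/server objects. */
struct server { unsigned int retries; };
struct target { int type; struct server *srv; };
#define OBJ_TYPE_SERVER 1

/* Checked accessor: returns NULL when the target is not a server. gcc 6+
 * may warn about a possible NULL dereference when the result is
 * dereferenced right after an if() that already proved it non-NULL. */
static inline struct server *objt_server(struct target *t)
{
    return (t && t->type == OBJ_TYPE_SERVER) ? t->srv : NULL;
}

/* Unchecked accessor: the caller guarantees the target really is a server. */
static inline struct server *__objt_server(struct target *t)
{
    return t->srv;
}

void count_retry(struct target *t)
{
    if (objt_server(t))               /* the check... */
        __objt_server(t)->retries++;  /* ...and the warning-free dereference */
}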
jiangwenyuan
pushed a commit
that referenced
this issue
Feb 13, 2020
Squashed commit of the following: commit dcfaeb3bff8d7ca4de3da1eb5024ee5e8789691f Author: Willy Tarreau <w@1wt.eu> Date: Thu Oct 24 18:41:51 2019 +0200 [RELEASE] Released version 1.9.12 Released version 1.9.12 with the following main changes : - BUG/MINOR: lua: Properly initialize the buffer's fields for string samples in hlua_lua2(smp|arg) - MINOR: mux-h2: add a per-connection list of blocked streams - BUILD: ebtree: make eb_is_empty() and eb_is_dup() take a const - BUG/MEDIUM: mux-h2: do not enforce timeout on long connections - BUG/MEDIUM: cache: make sure not to cache requests with absolute-uri - DOC: clarify some points around http-send-name-header's behavior - MINOR: stats: mention in the help message support for "json" and "typed" - BUG/MINOR: ssl: abort on sni allocation failure - BUG/MINOR: ssl: free the sni_keytype nodes - BUG/MINOR: ssl: abort on sni_keytypes allocation failure - BUILD: ssl: wrong #ifdef for SSL engines code - BUG/MEDIUM: htx: Catch chunk_memcat() failures when HTX data are formatted to h1 - BUG/MINOR: chunk: Fix tests on the chunk size in functions copying data - BUG/MINOR: mux-h1: Mark the output buffer as full when the xfer is interrupted - BUG/MINOR: mux-h1: Capture ignored parsing errors - BUG/MINOR: WURFL: fix send_log() function arguments - BUG/MINOR: http-htx: Properly set htx flags on error files to support keep-alive - BUG/MINOR: mworker/ssl: close openssl FDs unconditionally - BUG/MINOR: tcp: Don't alter counters returned by tcp info fetchers - BUG/MEDIUM: mux_pt: Make sure we don't have a conn_stream before freeing. - BUG/MAJOR: idle conns: schedule the cleanup task on the correct threads - Revert e8826ded5fea3593d89da2be5c2d81c522070995. - BUG/MEDIUM: mux_pt: Don't destroy the connection if we have a stream attached. - BUG/MEDIUM: mux_pt: Only call the wake emthod if nobody subscribed to receive. - CLEANUP: ssl: make ssl_sock_load_cert*() return real error codes - CLEANUP: ssl: make ssl_sock_put_ckch_into_ctx handle errcode/warn - CLEANUP: ssl: make ssl_sock_load_dh_params handle errcode/warn - CLEANUP: bind: handle warning label on bind keywords parsing. - BUG/MEDIUM: ssl: 'tune.ssl.default-dh-param' value ignored with openssl > 1.1.1 - BUG/MINOR: mworker/cli: reload fail with inherited FD - BUG/MINOR: ssl: Fix fd leak on error path when a TLS ticket keys file is parsed - BUG/MINOR: stick-table: Never exceed (MAX_SESS_STKCTR-1) when fetching a stkctr - BUG/MINOR: cache: alloc shctx after check config - BUG/MINOR: sample: Make the `field` converter compatible with `-m found` - BUG/MAJOR: mux-h2: fix incorrect backport of connection timeout fix - BUG/MINOR: mux-h2: also make sure blocked legacy connections may expire - BUG/MINOR: ssl: fix memcpy overlap without consequences. - BUG/MINOR: stick-table: fix an incorrect 32 to 64 bit key conversion - BUG/MEDIUM: pattern: make the pattern LRU cache thread-local and lockless commit d8979fee8d49b444c736051fe759f8ae3aa2c997 Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 23 06:59:31 2019 +0200 BUG/MEDIUM: pattern: make the pattern LRU cache thread-local and lockless As reported in issue #335, a lot of contention happens on the PATLRU lock when performing expensive regex lookups. This is absurd since the purpose of the LRU cache was to have a fast cache for expressions, thus the cache must not be shared between threads and must remain lockless. This commit makes the LRU cache thread-local and gets rid of the PATLRU lock. 
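A minimal sketch of the thread-local cache pattern this pattern/LRU commit describes, using made-up names and C11 _Thread_local rather than haproxy's real pat_lru machinery: each thread owns its own table, so lookups never take a lock.

#include <stdlib.h>

struct lru_entry { unsigned long long key; void *result; };

static _Thread_local struct lru_entry *local_cache;  /* one table per thread */
static _Thread_local size_t local_cache_sz;

/* Called once per thread at startup (haproxy uses a per-thread init hook). */
void cache_init_per_thread(size_t nb_entries)
{
    local_cache = calloc(nb_entries, sizeof(*local_cache));
    local_cache_sz = nb_entries;
}

void *cache_lookup(unsigned long long key)
{
    if (!local_cache || !local_cache_sz)
        return NULL;
    struct lru_entry *e = &local_cache[key % local_cache_sz];
    return (e->key == key) ? e->result : NULL;        /* no lock taken */
}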
A test with 7 threads on 4 cores climbed from 67kH/s to 369kH/s, or a scalability factor of 5.5. Given the huge performance difference and the regression caused to users migrating from processes to threads, this should be backported at least to 2.0. Thanks to Brian Diekelman for his detailed report about this regression. (cherry picked from commit 403bfbb130f9fb31e52d441ebc1f8227f6883c22) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 7fdd81c43fd75349d4496649d2176ad258e55a4b) [wt: s/REGISTER_PER_THREAD_ALLOC/REGISTER_PER_THREAD_INIT/] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 5c779f766524d5d6a754710a5a8a4d3d3013d2e6 Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 23 06:21:05 2019 +0200 BUG/MINOR: stick-table: fix an incorrect 32 to 64 bit key conversion As reported in issue #331, the code used to cast a 32-bit to a 64-bit stick-table key is wrong. It only copies the 32 lower bits in place on little endian machines or overwrites the 32 higher ones on big endian machines. It ought to simply remove the wrong cast dereference. This bug was introduced when changing stick table keys to samples in 1.6-dev4 by commit bc8c404449 ("MAJOR: stick-tables: use sample types in place of dedicated types") so it the fix must be backported as far as 1.6. (cherry picked from commit 28c63c15f572a1afeabfdada6a0a4f4d023d05fc) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 6fe22ed08a642d27f1a228c6f3b7f9f0dd0ea4cd) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 675702e22476e63495a1087f8303e889c1ab47a2 Author: Emeric Brun <ebrun@haproxy.com> Date: Tue Oct 8 18:27:37 2019 +0200 BUG/MINOR: ssl: fix memcpy overlap without consequences. A trick is used to set SESSION_ID, and SESSION_ID_CONTEXT lengths to 0 and avoid ASN1 encoding of these values. There is no specific function to set the length of those parameters to 0 so we fake this calling these function to a different value with the same buffer but a length to zero. But those functions don't seem to check the length of zero before performing a memcpy of length zero but with src and dst buf on the same pointer, causing valgrind to bark. So the code was re-work to pass them different pointers even if buffer content is un-used. In a second time, reseting value, a memcpy overlap happened on the SESSION_ID_CONTEXT. It was re-worked and this is now reset using the constant global value SHCTX_APPNAME which is a different pointer with the same content. This patch should be backported in every version since ssl support was added to haproxy if we want valgrind to shut up. This is tracked in github issue #56. (cherry picked from commit eb46965bbb21291aab75ae88f033d9c9bab4a785) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 7b34de3f4ccb3db391a416ef1796cc0a35b11712) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 9433a5e2a6e84bef7dfbbe54d19213a251e5fc5c Author: Willy Tarreau <w@1wt.eu> Date: Tue Oct 22 10:04:39 2019 +0200 BUG/MINOR: mux-h2: also make sure blocked legacy connections may expire The backport of commit 2dcdc2236 ("MINOR: mux-h2: add a per-connection list of blocked streams") missed one addition of LIST_ADDQ(blocked_list) for the legacy version. This makes the stream not be counted as blocked and will not let the connection expire in this specific case. This fix is specific to 2.0 and must be backported to 1.9 as well. 
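For the 32-to-64-bit stick-table key fix quoted a few commits above, a small standalone illustration of why reinterpreting memory (instead of converting the value) yields an endian-dependent key; this shows the class of bug only, not haproxy's actual code.

#include <stdint.h>
#include <string.h>

/* Widening a 32-bit sample into a 64-bit key slot. The broken variant
 * reinterprets memory and therefore depends on byte order; the fixed
 * variant converts the value, which works everywhere. */
void key_from_u32_broken(const uint32_t *sample, uint64_t *slot)
{
    /* Copies only 4 of the 8 bytes: on little endian the upper half of
     * *slot keeps stale data, on big endian the wrong half is written. */
    memcpy(slot, sample, sizeof(*sample));
}

void key_from_u32_fixed(const uint32_t *sample, uint64_t *slot)
{
    *slot = *sample;   /* plain integer widening, endian-independent */
}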
(cherry picked from commit 55dc0842fc105eb87c5d1dae68a6c613396e2103) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 66112ebface495cf645fc713aff57bbef292ee7c Author: Willy Tarreau <w@1wt.eu> Date: Tue Oct 22 09:56:28 2019 +0200 BUG/MAJOR: mux-h2: fix incorrect backport of connection timeout fix In commit c2ea47f ("BUG/MEDIUM: mux-h2: do not enforce timeout on long connections") the changes were applied to the proper location but while this code was moved before the task_destroy() call in 2.0, in 1.9 it is after task_free() so we can return a task that was just freed, in case the timeout strikes and at the same time a new stream arrives. The probability that this issue occurs is rather low but its consequences are high and this could be the cause of issue #329. This patch is specific to 1.9 as the issue does not exist in newer versions. If the commit above is ever backported to 1.8, this one will need to as well. commit d513ff45727e0f1ad1b768723d073ed1b19acd86 Author: Tim Duesterhus <tim@bastelstu.be> Date: Wed Oct 16 15:11:15 2019 +0200 BUG/MINOR: sample: Make the `field` converter compatible with `-m found` Previously an expression like: path,field(2,/) -m found always returned `true`. Bug exists since the `field` converter exists. That is: f399b0debfc6c7dc17c6ad503885c911493add56 The fix should be backported to 1.6+. (cherry picked from commit 4381d26edc03faa46401eb0fe82fd7be84be14fd) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 4fa9857b3dc57703c99982a140df5d8119351262) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit f7df6b373d9030af6fbfd6ca2aa035f7f1d0d0b5 Author: William Lallemand <wlallemand@haproxy.com> Date: Wed Aug 28 15:22:49 2019 +0200 BUG/MINOR: cache: alloc shctx after check config When running haproxy -c, the cache parser is trying to allocate the size of the cache. This can be a problem in an environment where the RAM is limited. This patch moves the cache allocation in the post_check callback which is not executed during a -c. This patch may be backported at least to 2.0 and 1.9. In 1.9, the callbacks registration mechanism is not the same. So the patch will have to be adapted. No need to backport it to 1.8, the code is probably too different. (cherry picked from commit d1d1e229453a492a538245f6a72ba6929eca9de1) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit e4876e2b03930a5e280c77f9ebd59f861080a3c5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit e5c0021542f17019148f0abb4a93b254f86226f3 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon Oct 21 10:53:34 2019 +0200 BUG/MINOR: stick-table: Never exceed (MAX_SESS_STKCTR-1) when fetching a stkctr When a stick counter is fetched, it is important that the requested counter does not exceed (MAX_SESS_STKCTR -1). Actually, there is no bug with a default build because, by construction, MAX_SESS_STKCTR is defined to 3 and we know that we never exceed the max value. scN_* sample fetches are numbered from 0 to 2. For other sample fetches, the value is tested. But there is a bug if MAX_SESS_STKCTR is set to a lower value. For instance 1. In this case the counters sc1_* and sc2_* may be undefined. This patch fixes the issue #330. It must be backported as far as 1.7. 
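A hedged sketch of the bound check described in the stkctr commit above; get_stkctr_checked() is a made-up name for illustration and the real fetch functions differ.

#include <stddef.h>

/* Whatever the build-time value of MAX_SESS_STKCTR, a fetched counter index
 * must stay within [0, MAX_SESS_STKCTR - 1] or the lookup must fail cleanly. */
#define MAX_SESS_STKCTR 3   /* default build value mentioned in the commit */

struct stkctr { unsigned long value; };

struct stkctr *get_stkctr_checked(struct stkctr *ctrs, unsigned int idx)
{
    if (idx >= MAX_SESS_STKCTR)   /* reject indexes past the configured limit */
        return NULL;
    return &ctrs[idx];
}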
(cherry picked from commit a9fa88a1eac9bd0ad2cfb761c4b69fd500a1b056) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 33c7e12479cb9bdc2e7e3783fda78a1b2c242363) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit dd68cbd4797ef27040d0fae5db6e7a7da8ca0952 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon Oct 21 09:55:49 2019 +0200 BUG/MINOR: ssl: Fix fd leak on error path when a TLS ticket keys file is parsed When an error occurred in the function bind_parse_tls_ticket_keys(), during the configuration parsing, the opened file is not always closed. To fix the bug, all errors are caught at the same place, where all resources are released. This patch fixes bug #325. It must be backported as far as 1.7. (cherry picked from commit e566f3db11e781572382e9bfff088a26dcdb75c5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 2bbc80ded1bc90dbf406e255917a1aa59c52902c) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 595ec6f7da17b0cf2f4ebc5aa9df0370f4011b1c Author: William Lallemand <wlallemand@haproxy.com> Date: Fri Oct 18 21:16:39 2019 +0200 BUG/MINOR: mworker/cli: reload fail with inherited FD When using the master CLI with 'fd@', during a reload, the master CLI proxy is stopped. Unfortunately if this is an inherited FD it is closed too, and the master CLI won't be able to bind again during the re-execution. It leads the master to fall back to waitpid mode. This patch forbids the inherited FDs in the master's listeners to be closed during a proxy_stop(). This patch is mandatory to use the -W option in VTest versions that contain the -mcli feature. (https://github.com/vtest/VTest/commit/86e65f1024453b1074d239a88330b5150d3e44bb) Should be backported as far as 1.9. (cherry picked from commit f7f488d8e9740d64cf82b7ef41e55d4f36fe1a43) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 79eeb2bafdff3fd6a197be99c49202c22ddeca35) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 89e61b9ad4db1097a3d53b83d914f6b0b8edc4ee Author: Emeric Brun <ebrun@haproxy.com> Date: Thu Oct 17 14:53:03 2019 +0200 BUG/MEDIUM: ssl: 'tune.ssl.default-dh-param' value ignored with openssl > 1.1.1 If openssl 1.1.1 is used, commit c2aae74f0 mistakenly enables the DH automatic feature from openssl instead of the ECDH automatic feature. There is no impact for the ECDH one because the feature is always enabled for that version. But doing this, the 'tune.ssl.default-dh-param' was completely ignored for DH parameters. This patch fixes the bug by calling 'SSL_CTX_set_ecdh_auto' instead of 'SSL_CTX_set_dh_auto'. Currently some users may use a 2048 DH bits parameter, thinking they're using a 1024 bits one. Doing this, they may experience performance issues on light hardware. This patch warns the user if haproxy fails to configure the given DH parameter. In this case and if the openssl version is > 1.1.0, haproxy will let openssl automatically choose a default DH parameter. For other openssl versions, the DH ciphers won't be usable. A common case of failure is due to the security level of openssl.cnf which could refuse a 1024 bits DH parameter for a 2048 bits key:
$ cat /etc/ssl/openssl.cnf
...
[system_default_sect]
MinProtocol = TLSv1
CipherString = DEFAULT@SECLEVEL=2
This should be backported into any branch containing the commit c2aae74f0. It requires all or part of the previous CLEANUP series. This addresses github issue #324.
(cherry picked from commit 6624a90a9ac2edb947a8c70fa6a8a283449750c6) Signed-off-by: Emeric Brun <ebrun@haproxy.com> (cherry picked from commit d6de151248603b357565ae52fe92440e66c1177c) Signed-off-by: Willy Tarreau <w@1wt.eu> commit c6a03f45f3ba27a47e7a7a62d6e40ee269e0f50d Author: Emeric Brun <ebrun@haproxy.com> Date: Thu Oct 17 16:45:56 2019 +0200 CLEANUP: bind: handle warning label on bind keywords parsing. All bind keyword parsing message were show as alerts. With this patch if the message is flagged only with ERR_WARN and not ERR_ALERT it will show a label [WARNING] and not [ALERT]. (cherry picked from commit 0655c9b22213a0f5716183106d86a995e672d19b) Signed-off-by: Emeric Brun <ebrun@haproxy.com> (cherry picked from commit df6dd890fd1167446326e99a816b9ba7ac86329f) Signed-off-by: Willy Tarreau <w@1wt.eu> commit e740508a0dd5e30433a86333b874a0833f989e17 Author: Emeric Brun <ebrun@haproxy.com> Date: Thu Oct 17 13:27:40 2019 +0200 CLEANUP: ssl: make ssl_sock_load_dh_params handle errcode/warn ssl_sock_load_dh_params used to return >0 or -1 to indicate success or failure. Make it return a set of ERR_* instead so that its callers can transparently report its status. Given that its callers only used to know about ERR_ALERT | ERR_FATAL, this is the only code returned for now. An error message was added in the case of failure and the comment was updated. (cherry picked from commit 7a88336cf83cd1592fb8e7bc456d72c00c2934e4) Signed-off-by: Emeric Brun <ebrun@haproxy.com> (cherry picked from commit cfc1afe9f21ec27612ed4ad84c4a066c68ca24af) Signed-off-by: Willy Tarreau <w@1wt.eu> commit ff4b8fd98b5763ba73b7bf1abb6fd9f8429fb2cc Author: Emeric Brun <ebrun@haproxy.com> Date: Thu Oct 17 13:25:14 2019 +0200 CLEANUP: ssl: make ssl_sock_put_ckch_into_ctx handle errcode/warn ssl_sock_put_ckch_into_ctx used to return 0 or >0 to indicate success or failure. Make it return a set of ERR_* instead so that its callers can transparently report its status. Given that its callers only used to know about ERR_ALERT | ERR_FATAL, this is the only code returned for now. And a comment was updated. (cherry picked from commit a96b582d0eaf1a7a9b21c71b8eda2965f74699d4) Signed-off-by: Emeric Brun <ebrun@haproxy.com> (cherry picked from commit 394701dc80ac9d429b12d405973fb30c348b81f3) Signed-off-by: Willy Tarreau <w@1wt.eu> commit d0dd39c5166a40430b8fe0e1645ccb9f54560fa6 Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 16 16:42:19 2019 +0200 CLEANUP: ssl: make ssl_sock_load_cert*() return real error codes These functions were returning only 0 or 1 to mention success or error, and made it impossible to return a warning. Let's make them return error codes from ERR_* and map all errors to ERR_ALERT|ERR_FATAL for now since this is the only code that was set on non-zero return value. In addition some missing comments were added or adjusted around the functions' return values. (cherry picked from commit bbc91965bf4bc7e08c5a9b93fdfa28a64c0949d3) [EBR: also include a part of 054563de1] Signed-off-by: Emeric Brun <ebrun@haproxy.com> (cherry picked from commit b131c870f9fbb5553d8970bb039609f97e1cc6e6) Signed-off-by: Willy Tarreau <w@1wt.eu> commit c5a267f9c5cc3df47272b682296df1c2cf1de65f Author: Olivier Houchard <cognet@ci0.org> Date: Fri Oct 18 14:18:29 2019 +0200 BUG/MEDIUM: mux_pt: Only call the wake emthod if nobody subscribed to receive. In mux_pt_io_cb(), instead of always calling the wake method, only do so if nobody subscribed for receive. If we have a subscription, just wake the associated tasklet up. 
This should be backported to 1.9 and 2.0. (cherry picked from commit 2ed389dc6e27257997f83e3f22cb6bf8898a2a5a) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit aafb6cc6563bd3b0eaefd02b42c7ad844e3d867e) [wt: s/>task/>tasklet] Signed-off-by: Willy Tarreau <w@1wt.eu>

commit f40492791d38ae6216c45e845d1dd4f19b8936a2 Author: Olivier Houchard <cognet@ci0.org> Date: Fri Oct 18 13:56:40 2019 +0200 BUG/MEDIUM: mux_pt: Don't destroy the connection if we have a stream attached. There's a small window where the mux_pt tasklet may be woken up, and thus mux_pt_io_cb() gets scheduled, and then the connection is attached to a new stream. If this happens, don't do anything, and just let the stream know by calling its wake method. If the connection had an error, the stream should take care of destroying it by calling the detach method. This should be backported to 2.0 and 1.9. (cherry picked from commit ea510fc5e7cf8ead040253869160b0d2266ce65f) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit a5115f2a4cb6ff11198dc8a5c598b3d75562f751) Signed-off-by: Willy Tarreau <w@1wt.eu>

commit b94c472302ca4b6cb6da475e9873e77666dbfb9a Author: Olivier Houchard <cognet@ci0.org> Date: Fri Oct 18 10:59:30 2019 +0200 Revert e8826ded5fea3593d89da2be5c2d81c522070995. This reverts commit "BUG/MEDIUM: mux_pt: Make sure we don't have a conn_stream before freeing.". mux_pt_io_cb() is only used if we have no associated stream, so we will never have a cs, so there's no need to check that, and we of course have to destroy the mux in mux_pt_detach() if we have no associated session, or if there's an error on the connection. This should be backported to 2.0 and 1.9. (cherry picked from commit 9dce2c53a8e49d43b501c3025d41705d302b1df1) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 6d206de892aba6449bc63f3c74e784d5f45722c9) [wt: adjusted context] Signed-off-by: Willy Tarreau <w@1wt.eu>

commit 27260cec163abc4bafefb65af92698f597cf8b5e Author: Willy Tarreau <w@1wt.eu> Date: Fri Oct 18 08:50:49 2019 +0200 BUG/MAJOR: idle conns: schedule the cleanup task on the correct threads The idle cleanup tasks' masks are wrong for threads 32 to 64, which causes the wrong thread to wake up and clean the connections that it does not own, with a risk of crash or infinite loop depending on concurrent accesses. For thread 32, any thread between 32 and 64 will be woken up, but for threads 33 to 64, in fact threads 1 to 32 will run the task instead. This issue only affects deployments enabling more than 32 threads. While it is not common in 1.9 where this has to be explicit, and can easily be dealt with by lowering the number of threads, it can be more common in 2.0 since by default the thread count is determined based on the number of available processors, hence the MAJOR tag which is mostly relevant to 2.x. The problem was first introduced into 1.9-dev9 by commit 0c18a6fe3 ("MEDIUM: servers: Add a way to keep idle connections alive.") and was later moved to cfgparse.c by commit 980855bd9 ("BUG/MEDIUM: server: initialize the orphaned conns lists and tasks at the end"). This patch needs to be backported as far as 1.9, with care as 1.9 is slightly different there (uses idle_task[] instead of idle_conn_cleanup[] like in 2.x).
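The idle-conns commit above does not show the code, but a classic way to end up with masks that are wrong exactly for threads 32 to 64 is building the per-thread bit with a 32-bit shift. The following standalone illustration (assuming 64-bit unsigned long) shows that class of mistake, not haproxy's actual code.

#include <stdio.h>

/* tid 33 should get bit 33; with a 32-bit shift it effectively gets bit 1,
 * so the task runs on the wrong thread. */
static unsigned long thread_mask_broken(int tid)
{
    return (unsigned long)(1U << (tid % 32));   /* wraps for tid >= 32 */
}

static unsigned long thread_mask_fixed(int tid)
{
    return 1UL << tid;   /* one distinct bit per thread id, up to 64 */
}

int main(void)
{
    printf("tid=33 broken=%#lx fixed=%#lx\n",
           thread_mask_broken(33), thread_mask_fixed(33));
    return 0;
}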
(cherry picked from commit bbb5f1d6d2a9948409683aa5865c130801d193ad) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 5fa7e736c3c35819fed7cfb4ddb4609a7d352b3b) [wt: applied to idle_task[] instead of idle_conn_cleanup[]) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 1333ecf29b938401a7ce6ba644e822650c50c625 Author: Olivier Houchard <ohouchard@haproxy.com> Date: Thu Oct 17 18:02:53 2019 +0200 BUG/MEDIUM: mux_pt: Make sure we don't have a conn_stream before freeing. On error, make sure we don't have a conn_stream before freeing the connection and the associated mux context. Otherwise a stream will still reference the connection, and attempt to use it. If we still have a conn_stream, it will properly be free'd when the detach method is called, anyway. This should be backported to 2.0 and 1.9. (cherry picked from commit e8826ded5fea3593d89da2be5c2d81c522070995) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 5cc02aa4d9ae13f1b6833dcb5cd1c30d7f9d524d) [wt: adjusted context] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 1226abb019fb4e925aa387710ba125a8e2e18d35 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Thu Oct 17 14:40:48 2019 +0200 BUG/MINOR: tcp: Don't alter counters returned by tcp info fetchers There are 2 kinds of tcp info fetchers. Those returning a time value (fc_rtt and fc_rttval) and those returning a counter (fc_unacked, fc_sacked, fc_retrans, fc_fackets, fc_lost, fc_reordering). Because of a bug, the counters were handled as time values, and by default, were divided by 1000 (because of an invalid conversion from us to ms). To work around this bug and have the right value, the argument "us" had to be specified. So now, tcp info fetchers returning a counter don't support any argument anymore. To not break old configurations, if an argument is provided, it is ignored and a warning is emitted during the configuration parsing. In addition, parameter validiation is now performed during the configuration parsing. This patch must be backported as far as 1.7. (cherry picked from commit ba0c53ef71cd7d2b344de318742d0ef239fd34e4) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 297df1860c6d09c7edde1dd6b0bd4f9758600ff3) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit faa668d2db81cc1aab4f7c4c2a81642991acd9e3 Author: William Lallemand <wlallemand@haproxy.com> Date: Tue Oct 15 14:04:08 2019 +0200 BUG/MINOR: mworker/ssl: close openssl FDs unconditionally Patch 56996da ("BUG/MINOR: mworker/ssl: close OpenSSL FDs on reload") fixes a issue where the /dev/random FD was leaked by OpenSSL upon a reload in master worker mode. Indeed the FD was not flagged with CLOEXEC. The fix was checking if ssl_used_frontend or ssl_used_backend were set to close the FD. This is wrong, indeed the lua init code creates an SSL server without increasing the backend value, so the deinit is never done when you don't use SSL in your configuration. To reproduce the problem you just need to build haproxy with openssl and lua with an openssl which does not use the getrandom() syscall. No openssl nor lua configuration are required for haproxy. This patch must be backported as far as 1.8. Fix issue #314. 
(cherry picked from commit 5fdb5b36e1e0bef9b8a79c3550bd7a8751bac396) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 1b7c4dc3fc509a40debbf4ffa6342f56c7046e83) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 8d257f1dcf78f1e397133829508fb208e8061386 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Wed Oct 16 09:09:04 2019 +0200 BUG/MINOR: http-htx: Properly set htx flags on error files to support keep-alive When an error file was loaded, the flag HTX_SL_F_XFER_LEN was never set on the HTX start line because of a bug. During the headers parsing, the flag H1_MF_XFER_LEN is never set on the h1m. But it was the condition to set HTX_SL_F_XFER_LEN on the HTX start-line. Instead, we must only rely on the flags H1_MF_CLEN or H1_MF_CHNK. Because of this bug, it was impossible to keep a connection alive for a response generated by HAProxy. Now the flag HTX_SL_F_XFER_LEN is set when an error file have a content length (chunked responses are unsupported at this stage) and the connection may be kept alive if there is no connection header specified to explicitly close it. This patch must be backported to 2.0 and 1.9. (cherry picked from commit 0d4ce93fcf9bd1f350c95f5a1bbe403bce57c680) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 046a2e27886fd52f969c04582aab4931c34a48c3) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 9a5cba43d2da5cb80791868e74eed8b0b9f11148 Author: Miroslav Zagorac <mzagorac@haproxy.com> Date: Mon Oct 14 17:15:56 2019 +0200 BUG/MINOR: WURFL: fix send_log() function arguments If the user agent data contains text that has special characters that are used to format the output from the vfprintf() function, haproxy crashes. String "%s %s %s" may be used as an example. % curl -A "%s %s %s" localhost:10080/index.html curl: (52) Empty reply from server haproxy log: 00000000:WURFL-test.clireq[00c7:ffffffff]: GET /index.html HTTP/1.1 00000000:WURFL-test.clihdr[00c7:ffffffff]: host: localhost:10080 00000000:WURFL-test.clihdr[00c7:ffffffff]: user-agent: %s %s %s 00000000:WURFL-test.clihdr[00c7:ffffffff]: accept: */* segmentation fault (core dumped) gdb 'where' output: #0 strlen () at ../sysdeps/x86_64/strlen.S:106 #1 0x00007f7c014a8da8 in _IO_vfprintf_internal (s=s@entry=0x7ffc808fe750, format=<optimized out>, format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n", ap=ap@entry=0x7ffc808fe8b8) at vfprintf.c:1637 #2 0x00007f7c014cfe89 in _IO_vsnprintf ( string=0x55cb772c34e0 "WURFL: retrieve header request returns [(null) %s %s %s B,w\313U", maxlen=<optimized out>, format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n", args=args@entry=0x7ffc808fe8b8) at vsnprintf.c:114 #3 0x000055cb758f898f in send_log (p=p@entry=0x0, level=level@entry=5, format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n") at src/log.c:1477 #4 0x000055cb75845e0b in ha_wurfl_log ( message=message@entry=0x55cb75989460 "WURFL: retrieve header request returns [%s]\n") at src/wurfl.c:47 #5 0x000055cb7584614a in ha_wurfl_retrieve_header (header_name=<optimized out>, wh=0x7ffc808fec70) at src/wurfl.c:763 In case WURFL (actually HAProxy) is not compiled with debug option enabled (-DWURFL_DEBUG), this bug does not come to light. This patch could be backported in every version supporting the ScientiaMobile's WURFL. 
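The WURFL crash described above is a textbook format-string bug: user-controlled text ends up being used as the format argument. A minimal reproduction using plain printf() instead of haproxy's send_log():

#include <stdio.h>

/* User input must never be used as the format string itself. */
void log_user_agent(const char *ua)
{
    /* BROKEN: printf(ua); would make "%s %s %s" read garbage varargs. */

    /* FIXED: the user agent is only ever consumed as data. */
    printf("WURFL: retrieve header request returns [%s]\n", ua);
}

int main(void)
{
    log_user_agent("%s %s %s");   /* the curl -A payload from the report */
    return 0;
}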
(as far as 1.7) (cherry picked from commit f0eb3739ac5460016455cd606d856e7bd2b142fb) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 5e1c1468789d80213325054b6fc1dbd1c70d7776) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 1ed7ca9a233fa8fb616be0cc74c397fc04918a13 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Oct 11 14:22:00 2019 +0200 BUG/MINOR: mux-h1: Capture ignored parsing errors When the option "accept-invalid-http-request" is enabled, some parsing errors are ignored. But the position of the error is reported. In legacy HTTP mode, such errors were captured. So, we now do the same in the H1 multiplexer. If required, this patch may be backported to 2.0 and 1.9. (cherry picked from commit 486498c630a0678446808107d02f94c48fc6722a) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit b4bad50d04cd3156d6ba11a7bcf60def96b70b2c) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 8bc10ec318788fd380787438ab5816c0ee6262ca Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon Oct 14 14:17:00 2019 +0200 BUG/MINOR: mux-h1: Mark the output buffer as full when the xfer is interrupted When an outgoing HTX message is formatted to a raw message, if we fail to copy data of an HTX block into the output buffer, we mark it as full. Before it was only done calling the function buf_room_for_htx_data(). But this function is designed to optimize input processing. This patch must be backported to 2.0 and 1.9. (cherry picked from commit a61aa544b4b95d1416fe5684ca2d3a0e110e9743) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 85fc6ef50dd0d2404c0c43a5671c949371f36ee9) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 882fc574d7d7c468afb2d575501fb6132f4e0193 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon Oct 14 11:29:48 2019 +0200 BUG/MINOR: chunk: Fix tests on the chunk size in functions copying data When raw data are copied or appended in a chunk, the result must not exceed the chunk size but it can reach it. Unlike functions to copy or append a string, there is no terminating null byte. This patch must be backported as far as 1.8. Note in 1.8, the functions chunk_cpy() and chunk_cat() don't exist. (cherry picked from commit 48fa033f2809af265c230a7c7cf86413b7f9909b) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit da0889df9f33c2a8585e95c95db0f81a80dcc40c) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 95525e773bc0bed68792df101936f21e98c9ab47 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon Oct 14 14:36:51 2019 +0200 BUG/MEDIUM: htx: Catch chunk_memcat() failures when HTX data are formatted to h1 In functions htx_*_to_h1(), most of time several calls to chunk_memcat() are chained. The expected size is always compared to available room in the buffer to be sure the full copy will succeed. But it is a bit risky because it relies on the fact the function chunk_memcat() evaluates the available room in the buffer in a same way than htx ones. And, unfortunately, it does not. A bug in chunk_memcat() will always leave a byte unused in the buffer. So, for instance, when a chunk is copied in an almost full buffer, the last CRLF may be skipped. To fix the issue, we now rely on the result of chunk_memcat() only. This patch must be backported to 2.0 and 1.9. 
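A sketch of the control flow the chunk_memcat fix above converges on, with a hypothetical chunk type and append helper standing in for haproxy's chunk API: trust the return value of every append instead of pre-computing the available room with a different rule.

#include <string.h>

struct chunk { char *area; size_t size; size_t data; };

int chunk_append(struct chunk *c, const void *src, size_t len)
{
    if (c->data + len > c->size)   /* not enough room: copy nothing */
        return 0;
    memcpy(c->area + c->data, src, len);
    c->data += len;
    return 1;
}

/* Emitting "<len>CRLF<payload>CRLF": abort on the first failed append so a
 * nearly full buffer can never silently drop the trailing CRLF. */
int emit_chunk(struct chunk *out, const char *hdr, const void *body, size_t blen)
{
    if (!chunk_append(out, hdr, strlen(hdr)) ||
        !chunk_append(out, body, blen) ||
        !chunk_append(out, "\r\n", 2))
        return 0;   /* caller marks the buffer full and retries later */
    return 1;
}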
(cherry picked from commit e0f8dc576f62ace9ad1055ca068ab5d4f3a952aa) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 3cc7647b31a7ec4373ac5023fff93da41ece117b) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 3221ec7f5ad063cb0c91d7268e5d4b5d7eedaf2e Author: William Lallemand <wlallemand@haproxy.com> Date: Mon Oct 14 14:14:59 2019 +0200 BUILD: ssl: wrong #ifdef for SSL engines code The SSL engines code was written below the OCSP #ifdef, which means you can't build the engines code if the OCSP is deactivated in the SSL lib. Could be backported in every version since 1.8. (cherry picked from commit 104a7a6c14fb30b7d44a28739ed83b43622e161e) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit c33d83d7d32f299e855b444d2a481d26356b3191) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit e697d6d366ee13b7645c0886c97d912242bff87d Author: William Lallemand <wlallemand@haproxy.com> Date: Fri Oct 4 17:36:55 2019 +0200 BUG/MINOR: ssl: abort on sni_keytypes allocation failure The ssl_sock_populate_sni_keytypes_hplr() function does not return an error upon an allocation failure. The process would probably crash during the configuration parsing if the allocation fails since it tries to copy some data in the allocated memory. This patch could be backported as far as 1.5. (cherry picked from commit 28a8fce485a94b636f6905134509c1150690b60f) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 4801c70bead696bed077fd71a55f6ff35bb6f9f5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit eb5e2b9be62454d927532a135b0519d5e232edd4 Author: William Lallemand <wlallemand@haproxy.com> Date: Fri Oct 4 17:24:39 2019 +0200 BUG/MINOR: ssl: free the sni_keytype nodes This patch frees the sni_keytype nodes once the sni_ctxs have been allocated in ssl_sock_load_multi_ckchn(). Could be backported in every version using the multi-cert SSL bundles. (cherry picked from commit 8ed5b965872e3b0cd6605f37bc8fe9f2819ce03c) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit e92c030dfcb298daa7175a789a2fdea42a4784c5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 63e63cb341cfb7607c83a085700ac9bf8fe14a3a Author: William Lallemand <wlallemand@haproxy.com> Date: Thu Oct 3 23:46:33 2019 +0200 BUG/MINOR: ssl: abort on sni allocation failure The ssl_sock_add_cert_sni() function never returns an error when a sni_ctx allocation fails. It silently ignores the problem and continues to try to allocate other snis. It is unlikely that a sni allocation will succeed after one failure, and we do not want to start with a configuration missing some of the snis. But to avoid any problem we return a -1 upon an sni allocation error and stop the configuration parsing. This patch must be backported in every version supporting the crt-list sni filters. (as far as 1.5) (cherry picked from commit fe49bb3d0c046628d67d57da15a7034cc2230432) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> [Cf: slightly adapted for 2.0] (cherry picked from commit 24e292c1054616e06b1025441ed7a0a59171d108) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit df7f897ba901686ca0f5833ce39dab498823615b Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 9 07:19:02 2019 +0200 MINOR: stats: mention in the help message support for "json" and "typed" Both "show info" and "show stat" support the "typed" output format and the "json" output format. I just never can remember them, which is an indication that some help is missing.
(cherry picked from commit 6103836315fd31418e1a09e820dfaf1cdd0abd98) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit e9038438c813a6ff028ded6c4250fd2269c1a215) Signed-off-by: Willy Tarreau <w@1wt.eu>

commit 4e2f7e359458e017cd724b558540d476339f07ed Author: Willy Tarreau <w@1wt.eu> Date: Mon Oct 7 14:58:02 2019 +0200 DOC: clarify some points around http-send-name-header's behavior The directive causes an existing header to be removed, which is not explicitly mentioned though already being relied on. Also mention the fact that it should not be used to modify transport level headers and that doing it on Host is more than border-line and definitely not a supported long-term option even though it currently works. (cherry picked from commit 81bef7e89993c963d934eab21cca744cb6a6cb03) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit e0e32794c781c4b0798ee7d9887e5d6c5d8b6405) Signed-off-by: Willy Tarreau <w@1wt.eu>

commit 436d6d8815b15def141270af3de1fbf1c175d803 Author: Willy Tarreau <w@1wt.eu> Date: Mon Oct 7 14:06:34 2019 +0200 BUG/MEDIUM: cache: make sure not to cache requests with absolute-uri If a request contains an absolute URI and gets its Host header field rewritten, or just the request's URI without touching the Host header field, it can lead to different Host and authority parts. The cache will always concatenate the Host and the path while a server behind would instead ignore the Host and use the authority found in the URI, leading to incorrect content possibly being cached. Let's simply refrain from caching absolute requests for now, which also matches what the comment at the top of the function says. Later we can improve this by having a special handling of the authority. This should be backported as far as 1.8. (cherry picked from commit 22c6107dba1127a1e6d204dc2a6da63c09f2d934) [wt: context; added the legacy-mode version as well] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit e77564496d252762ecb1b25022774f134e90c7ac) [wt: context adjustments; s/http_get_stline/http_find_stline] Signed-off-by: Willy Tarreau <w@1wt.eu>

commit b30a439cd772a0d04e96c9b6b2b49b3f438f9d94 Author: Willy Tarreau <w@1wt.eu> Date: Tue Oct 1 10:12:00 2019 +0200 BUG/MEDIUM: mux-h2: do not enforce timeout on long connections Alexandre Derumier reported issue #308 in which the client timeout will strike on an H2 mux when it's shorter than the server's response time. What happens in practice is that there is no activity on the connection and there's no data pending on output so we can expire it. But this does not take into account the possibility that some streams are in fact waiting for the data layer above. So what we do now is that we enforce the timeout when:
- there are no more streams
- some data are pending in the output buffer
- some streams are blocked on the connection's flow control
- some streams are blocked on their own flow control
- some streams are in the send/sending list
In all other cases the connection will not time out as it means that some streams are actively used by the data layer. This fix must be backported to 2.0, 1.9 and probably 1.8 as well. It depends on the new "blocked_list" field introduced by "MINOR: mux-h2: add a per-connection list of blocked streams". It would be nice to also backport "ebtree: make eb_is_empty() and eb_is_dup() take a const" to avoid a build warning.
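A hedged sketch of the timeout rule listed above, with made-up field names (the real h2c structure differs): the connection timeout is only armed when no stream is actively waiting on the data layer.

#include <stdbool.h>
#include <stddef.h>

struct h2c_view {
    int nb_streams;            /* streams still attached */
    size_t out_pending;        /* bytes waiting in the output buffer */
    int blocked_on_conn_fctl;  /* streams blocked on connection flow control */
    int blocked_on_strm_fctl;  /* streams blocked on their own flow control */
    int in_send_list;          /* streams queued in the send/sending lists */
};

bool h2c_may_expire(const struct h2c_view *c)
{
    return c->nb_streams == 0 ||
           c->out_pending != 0 ||
           c->blocked_on_conn_fctl ||
           c->blocked_on_strm_fctl ||
           c->in_send_list;
}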
(cherry picked from commit c2ea47fb18664ac68d94da2fe0b30e1a626aa869) [wt: adjusted context] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit 534d8f99552b05dc07da744749f27ba684c14924) [wt: replace task_destroy()->task_delete()+task_free(); s/br_data/b_data/] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 204ee192ed42ac0d190a139490144bd55bd963e5 Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 2 15:21:58 2019 +0200 BUILD: ebtree: make eb_is_empty() and eb_is_dup() take a const For whatever absurd reason these ones do not take a const, resulting in some haproxy functions being forced to confusingly use variables instead of const arguments. Let's fix this and backport it to older versions. (cherry picked from commit 43be340a0e4d9194ea06514d8d5cfb6964381dfc) [wt: dependency for next patch] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit cdd2c36291e057e505a25f10c128ae90c946891a) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 9411bc8aac8cc073600d2d6167922dc416c370fe Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 2 10:49:59 2019 +0200 MINOR: mux-h2: add a per-connection list of blocked streams Currently the H2 mux doesn't have a list of all the streams blocking on the H2 side. It only knows about those trying to send or waiting for a connection window update. It is problematic to enforce timeouts because we never know if a stream has to live as long as the data layer wants or has to be timed out becase it's waiting for a stream window update. This patch adds a new list, "blocked_list", to store streams blocking on stream flow control, or later, dependencies. Streams blocked on sfctl are now added there. It doesn't modify the rest of the logic. (cherry picked from commit 9edf6dbecc9d50d3ab3764b307010a627e39117c) [wt: dependency for next patch; removed traces] Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit bda92dd1a50d47ba993bc5ad1af6e73a85f8f388) [wt: minor context adjustments] Signed-off-by: Willy Tarreau <w@1wt.eu> commit a1ed96379f01943a0989bbcf8ca05e22e6f983f9 Author: Tim Duesterhus <tim@bastelstu.be> Date: Sun Sep 29 23:03:07 2019 +0200 BUG/MINOR: lua: Properly initialize the buffer's fields for string samples in hlua_lua2(smp|arg) `size` is used in conditional jumps and valgrind complains: ==24145== Conditional jump or move depends on uninitialised value(s) ==24145== at 0x4B3028: smp_is_safe (sample.h:98) ==24145== by 0x4B3028: smp_make_safe (sample.h:125) ==24145== by 0x4B3028: smp_to_stkey (stick_table.c:936) ==24145== by 0x4B3F2A: sample_conv_in_table (stick_table.c:1113) ==24145== by 0x420AD4: hlua_run_sample_conv (hlua.c:3418) ==24145== by 0x54A308F: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0) ==24145== by 0x54AFEFC: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0) ==24145== by 0x54A29F1: ??? (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0) ==24145== by 0x54A3523: lua_resume (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0) ==24145== by 0x426433: hlua_ctx_resume (hlua.c:1097) ==24145== by 0x42D7F6: hlua_action (hlua.c:6218) ==24145== by 0x43A414: http_req_get_intercept_rule (http_ana.c:3044) ==24145== by 0x43D946: http_process_req_common (http_ana.c:500) ==24145== by 0x457892: process_stream (stream.c:2084) Found while investigating issue #306. A variant of this issue exists since 55da165301b4de213dacf57f1902c2142e867775, which was using the old `chunk` API instead of the `buffer` API thus this patch must be backported to HAProxy 1.6 and higher. 
(cherry picked from commit 29d2e8aa9abe48539607692ba69a6a5fb4e96ca8) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit f371b834c61394196b3c3cb4c76cabbe80f9e6fe) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 16f17c207713cfda2798231802fc3da4ae460d9b Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Sep 27 16:30:59 2019 +0200 [RELEASE] Released version 1.9.11 Released version 1.9.11 with the following main changes : - BUG/MINOR: lua: fix setting netfilter mark - BUG/MEDIUM: lua: Fix test on the direction to set the channel exp timeout - BUG/MEDIUM: mux-h1: do not truncate trailing 0CRLF on buffer boundary - DOC: fixed typo in management.txt - BUG/MINOR: mworker: disable SIGPROF on re-exec - BUG/MEDIUM: listener/threads: fix an AB/BA locking issue in delete_listener() - BUG/MEDIUM: url32 does not take the path part into account in the returned hash. - BUG/MEDIUM: proto-http: Always start the parsing if there is no outgoing data - BUG/MINOR: http-ana: Reset response flags when 1xx messages are handled - BUG/MINOR: h1: Properly reset h1m when parsing is restarted - BUG/MEDIUM: cache: Don't cache objects if the size of headers is too big - BUG/MINOR: lb/leastconn: ignore the server weights for empty servers - BUG/MEDIUM: connection: don't keep more idle connections than ever needed - MINOR: stats: report the number of idle connections for each server - BUG/MINOR: listener: Fix a possible null pointer dereference - MEDIUM: checks: Make sure we unsubscribe before calling cs_destroy(). - BUG/MEDIUM: http: also reject messages where "chunked" is missing from transfer-enoding - BUG/MINOR: filters: Properly set the HTTP status code on analysis error - BUG/MINOR: acl: Fix memory leaks when an ACL expression is parsed - BUG/MINOR: Missing stat_field_names (since f21d17bb) - BUG/MAJOR: mux-h2: Handle HEADERS frames received after a RST_STREAM frame - BUG/MEDIUM: check/threads: make external checks run exclusively on thread 1 - BUG/MINOR: mux-h2: do not wake up blocked streams before the mux is ready - BUG/MEDIUM: namespace: close open namespaces during soft shutdown - BUG/MEDIUM: mux-h2: don't reject valid frames on closed streams - BUG/MINOR: mux-h2: Use the dummy error when decoding headers for a closed stream - BUG/MAJOR: mux_h2: Don't consume more payload than received for skipped frames - MINOR: tools: implement my_flsl() - BUG/MEDIUM: spoe: Use a different engine-id per process - MINOR: spoe: Improve generation of the engine-id - MINOR: spoe: Support the async mode with several threads - DOC: Fix documentation about the cli command to get resolver stats - BUG/MEDIUM: namespace: fix fd leak in master-worker mode commit a7f2abde42a04faba327b552d38749fe001e22b5 Author: Krisztián Kovács (kkovacs) <Krisztian.Kovacs@oneidentity.com> Date: Fri Sep 20 14:48:19 2019 +0000 BUG/MEDIUM: namespace: fix fd leak in master-worker mode When namespaces are used in the configuration, the respective namespace handles are opened during config parsing and stored in an ebtree for lookup later. Unfortunately, when the master process re-execs itself these file descriptors were not closed, effectively leaking the fds and preventing destruction of namespaces no longer present in the configuration. This change fixes this issue by opening the namespace file handles as close-on-exec, making sure that they will be closed during re-exec. 
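A minimal sketch of the close-on-exec fix described above, with the path handling and error reporting simplified:

#include <fcntl.h>
#include <stdio.h>

/* Opening the namespace handle with O_CLOEXEC guarantees the fd disappears
 * when the master re-execs itself, instead of leaking into the new process. */
int open_namespace_fd(const char *name)
{
    char path[256];

    snprintf(path, sizeof(path), "/var/run/netns/%s", name);
    return open(path, O_RDONLY | O_CLOEXEC);   /* -1 on error, checked by caller */
}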
(cherry picked from commit 538aa7168fca1adf2ecd0aa4a47e6b8856275f55) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 59af61a19493ccc50e3815d84c9323762cf28fcd) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 61a74eaf84dad6d04986e517a71e172b53a15f80 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Sep 27 10:45:47 2019 +0200 DOC: Fix documentation about the cli command to get resolver stats In the management guide, this command was still referenced as "show stat resolvers" instead of "show resolvers". The cli command was fixed by the commit ff88efbd7 ("BUG/MINOR: dns: Fix CLI keyword declaration"). This patch fixes the issue #296. It can be backported as fas as 1.7. (cherry picked from commit 78c430616552e024fc1e7a6650302702ae4544d1) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 7a5303a392ffa16de95f21b5d1ce4acb9a1778cf) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit a65232a1585760d556c461821cbc5f3711692fb3 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Tue Sep 17 11:55:52 2019 +0200 MINOR: spoe: Support the async mode with several threads A different engine-id is now generated for each thread. So, it is possible to enable the async mode with several threads. This patch may be backported to older versions. (cherry picked from commit b1bb1afa4741a20e5bf954f0065ae7b747a3e219) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 46c76822351ae4572fd28392838fff35e2e2a7d8) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 5a51b71e61a4a3cd6a6c9add4baa805744fbbbca Author: Christopher Faulet <cfaulet@haproxy.com> Date: Tue Sep 17 15:07:02 2019 +0200 MINOR: spoe: Improve generation of the engine-id Use the same algo than the sample fetch uuid(). This one was added recently. So it is better to use the same way to generate UUIDs. This patch may be backported to older versions. (cherry picked from commit 09bd9aa412d67cfd326b60ade1510f1d4c6344a9) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit eaf11915d74d184a656c83596650a6dfabb2fc1a) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit a8ec7eeca04953c274fcf187a090e316e4211465 Author: Kevin Zhu <ipandtcp@gmail.com> Date: Tue Sep 17 15:05:45 2019 +0200 BUG/MEDIUM: spoe: Use a different engine-id per process SPOE engine-id is the same for all processes when nbproc is more than 1. So, in async mode, an agent receiving a NOTIFY frame from a process may send the ACK to another process. It is abviously wrong. A different engine-id must be generated for each process. This patch must be backported to 2.0, 1.9 and 1.8. (cherry picked from commit d87b1a56d526568b55ee33b77f77c87455026ae1) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 652c8e238537373d69aff5c0608e35a9f373dd05) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 919653f0419579c825c99488adfff7ac5975dc63 Author: Willy Tarreau <w@1wt.eu> Date: Tue Mar 5 12:04:55 2019 +0100 MINOR: tools: implement my_flsl() We already have my_ffsl() to find the lowest bit set in a word, and this patch implements the search for the highest bit set in a word. On x86 it uses the bsr instruction and on other architectures it uses an efficient implementation. (cherry picked from commit d87a67f9bc422bf41b6b81c1e99d9aebbbc18d8e) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> [Cf: This one is required to backport some patches on the SPOE. 
It is just a new function, so there is no impact] commit 35746784c1f0306e49f793012acb045cb55df55d Author: Christopher Faulet <cfaulet@haproxy.com> Date: Thu Sep 26 16:38:28 2019 +0200 BUG/MAJOR: mux_h2: Don't consume more payload than received for skipped frames When a frame is received for a unknown or already closed stream, it must be skipped. This also happens when a stream error is reported. But we must be sure to only skip received data. In the loop in h2_process_demux(), when such frames are handled, all the frame lenght is systematically skipped. If the frame payload is partially received, it leaves the demux buffer in an undefined state. Because of this bug, all sort of errors may be observed, like crash or intermittent freeze. This patch must be backported to 2.0, 1.9 and 1.8. (cherry picked from commit 5112a603d9507cac84ae544863251e814e5eb8d8) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit e12a26f6b4f7d0f2cf49b24eeb2c5cb8218cc974) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 67f0c102741590a7ba777fad2d4cfc9fa1ea4719 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Thu Sep 26 16:19:13 2019 +0200 BUG/MINOR: mux-h2: Use the dummy error when decoding headers for a closed stream Since the commit 6884aa3e ("BUG/MAJOR: mux-h2: Handle HEADERS frames received after a RST_STREAM frame"), HEADERS frames received for an unknown or already closed stream are decoded. Once decoded, an error is reported for the stream. But because it is a dummy stream (h2_closed_stream), its state cannot be changed. So instead, we must return the dummy error stream (h2_error_stream). This patch must be backported to 2.0 and 1.9. (cherry picked from commit ea7a7781a94addb9fb18ef8064c96d73fe5add3d) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 0342705074569b5558036101f0bd0e00eab56632) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 322cbafd5d3b06f3f3ee6619d9397820c2cdc250 Author: Willy Tarreau <w@1wt.eu> Date: Thu Sep 26 08:47:15 2019 +0200 BUG/MEDIUM: mux-h2: don't reject valid frames on closed streams Consecutive to commit 6884aa3eb0 ("BUG/MAJOR: mux-h2: Handle HEADERS frames received after a RST_STREAM frame") some valid frames on closed streams (RST_STREAM, PRIORITY, WINDOW_UPDATE) were now rejected. It turns out that the previous condition was in fact intentional to catch only sensitive frames, which was indeed a mistake since these ones needed to be decoded to keep HPACK synchronized. But we must absolutely accept WINDOW_UPDATES or we risk to stall some transfers. And RST/PRIO definitely are valid. Let's adjust the condition to reflect that and update the comment to explain the reason for this unobvious condition. This must be backported to 2.0 and 1.9 after the commit above is brought there. (cherry picked from commit 4c08f12dd86bda574c15261bcd69135dd662f990) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit eb5be9d6e787a404e4fbf14ae9285bb969d37196) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 5a861225731bf181bef3e0b6b92490dd836bd52c Author: Krisztian Kovacs <krisztian.kovacs@oneidentity.com> Date: Tue Sep 24 14:12:13 2019 +0200 BUG/MEDIUM: namespace: close open namespaces during soft shutdown When doing a soft shutdown, we won't be making new connections anymore so there's no point in keeping the namespace file descriptors open anymore. 
Keeping these open effectively makes it impossible to properly clean up namespaces which are no longer used in the new configuration until all previously opened connections are closed in the old worker process. This change introduces a cleanup function that is called during soft shutdown that closes all namespace file descriptors by iterating over the namespace ebtree. (cherry picked from commit 710d987cd62ab0779418f14aa2168dc10ef6bac7) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 6d215536f4aa2e3c95fde9d001a1c894d4eecb93) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 3a311921f391480e42ac6d80a754e323888f52df Author: Willy Tarreau <w@1wt.eu> Date: Wed Sep 25 07:57:31 2019 +0200 BUG/MINOR: mux-h2: do not wake up blocked streams before the mux is ready In h2_send() we used to scan pending streams and wake them up when it's possible to send, without considering the connection's state. Thus caused some excess failed calls to h2_snd_buf() during the preface on backend connections : [01|h2|4|mux_h2.c:3562] h2_wake(): entering : h2c=0x7f1430032ed0(B,PRF) [01|h2|4|mux_h2.c:3475] h2_process(): entering : h2c=0x7f1430032ed0(B,PRF) [01|h2|4|mux_h2.c:3326] h2_send(): entering : h2c=0x7f1430032ed0(B,PRF) [01|h2|4|mux_h2.c:3152] h2_process_mux(): entering : h2c=0x7f1430032ed0(B,PRF) [01|h2|4|mux_h2.c:1508] h2c_bck_send_preface(): entering : h2c=0x7f1430032ed0(B,PRF) [01|h2|4|mux_h2.c:1379] h2c_send_settings(): entering : h2c=0x7f1430032ed0(B,PRF) [01|h2|4|mux_h2.c:1464] h2c_send_settings(): leaving : h2c=0x7f1430032ed0(B,PRF) [01|h2|4|mux_h2.c:1543] h2c_bck_send_preface(): leaving : h2c=0x7f1430032ed0(B,PRF) [01|h2|4|mux_h2.c:3241] h2_process_mux(): leaving : h2c=0x7f1430032ed0(B,STG) [01|h2|3|mux_h2.c:3384] sent data : h2c=0x7f1430032ed0(B,STG) >>> streams woken up here [01|h2|4|mux_h2.c:3428] h2_send(): waking up pending stream : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:3435] h2_send(): leaving with everything sent : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:3326] h2_send(): entering : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:3152] h2_process_mux(): entering : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:3241] h2_process_mux(): leaving : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:3435] h2_send(): leaving with everything sent : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:3552] h2_process(): leaving : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:3564] h2_wake(): leaving >>> I/O callback was already scheduled and called despite having nothing left to do [01|h2|4|mux_h2.c:3454] h2_io_cb(): entering : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:3326] h2_send(): entering : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:3152] h2_process_mux(): entering : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:3241] h2_process_mux(): leaving : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:3435] h2_send(): leaving with everything sent : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:3463] h2_io_cb(): leaving >>> stream tries and fails again here! 
[01|h2|4|mux_h2.c:5568] h2_snd_buf(): entering : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:5587] h2_snd_buf(): connection not ready, leaving : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:5398] h2_subscribe(): entering : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:5408] h2_subscribe(): subscribe(send) : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:5422] h2_subscribe(): leaving : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:5475] h2_rcv_buf(): entering : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:5535] h2_rcv_buf(): leaving : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:5398] h2_subscribe(): entering : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:5400] h2_subscribe(): subscribe(recv) : h2c=0x7f1430032ed0(B,STG) [01|h2|4|mux_h2.c:5422] h2_subscribe(): leaving : h2c=0x7f1430032ed0(B,STG) This can happen when sending the preface, the settings, and the settings ACK. Let's simply condition the wake up on st0 >= FRAME_H as is done at other places. (cherry picked from commit cec60056e495dc859ca6ebebfd8fa6b5b031ffa5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit c51d47c92e35f18f8b2a8b8fb8b27e01545c9797) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit c4c0057b89e0de93521a6caa5a3cd4d375357dc4 Author: Willy Tarreau <w@1wt.eu> Date: Tue Sep 3 18:55:02 2019 +0200 BUG/MEDIUM: check/threads: make external checks run exclusively on thread 1 See GH issues #141 for all the context. In short, registered signal handlers are not inherited by other threads during startup, which is normally not a problem, except that we need that the same thread as the one doing the fork() cleans up the old process using waitpid() once its death is reported via SIGCHLD, as happens in external checks. The only simple solution to this at the moment is to make sure that external checks are exclusively run on the first thread, the one which registered the signal handlers on startup. It will be far more than enough anyway given that external checks must not require to be load balanced on multiple threads! A more complex solution could be designed over the long term to let each thread deal with all signals but it sounds overkill. This must be backported as far as 1.8. (cherry picked from commit 6dd4ac890b5810b0f0fe81725fda05ad3d052849) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit b143711afe833f9824a7372b88ef9435ff240e9a) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit c50865c0f5d97c2a18212c64e57cd8aa4c25f92f Author: Christopher Faulet <cfaulet@haproxy.com> Date: Mon Sep 23 15:28:20 2019 +0200 BUG/MAJOR: mux-h2: Handle HEADERS frames received after a RST_STREAM frame As stated in the RFC7540#5.1, an endpoint that receives any frame other than PRIORITY after receiving a RST_STREAM MUST treat that as a stream error of type STREAM_CLOSED. However, frames carrying compression state must still be processed before being dropped to keep the HPACK decoder synchronized. This had to be the purpose of the commit 8d9ac3ed8b ("BUG/MEDIUM: mux-h2: do not abort HEADERS frame before decoding them"). But, the test on the frame type was inverted. This bug is major because desynchronizing the HPACK decoder leads to mixup indexed headers in messages. From the time an HEADERS frame is received and ignored for a closed stream, wrong headers may be sent to the following streams. This patch may fix several bugs reported on github (#116, #290, #292). It must be backported to 2.0 and 1.9. 
(cherry picked from commit 6884aa3eb00d1a5eb6f9c81a3a00288c13652938) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 96b88f2e605e76f2a472cf9fa83398ff242d47bb) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
commit 5cf69f6360f345682341b4ef5f6e0218245226c3 Author: Adis Nezirovic <anezirovic@haproxy.com> Date: Fri Sep 13 11:43:03 2019 +0200 BUG/MINOR: Missing stat_field_names (since f21d17bb) Recently Lua code which uses the Proxy class (get_stats method) stopped working ("table index is nil from [C] method 'get_stats'"). It probably affects other codepaths too. This should be backported to 2.0 and 1.9. (cherry picked from commit a46b142e8807ea640e041d3a29e3fd427844d559) Signed-off-by: Willy Tarreau <w@1wt.eu> (cherry picked from commit eeea5702a58426d44d866ec6ebbfc4b7ac40696c) Signed-off-by: Willy Tarreau <w@1wt.eu>
commit d84d945ac2a3d33044d7d56b8ec709d9e6a0aec3 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Sep 13 09:50:15 2019 +0200 BUG/MINOR: acl: Fix memory leaks when an ACL expression is parsed This only happens during the configuration parsing. The first leak is the string representing the last converter parsed, if any. The second one is on the error path, when the allocation of the ACL expression failed. In this case, the sample was not released. This patch fixes the issue #256. It must be backported to all stable versions. (cherry picked from commit 361935aa1e327d2249453eab0b8f0300683f47b2) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit 6a4c746b63c89c7d4c5f21d79ceb45207ebb24bb) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
commit c7db58bd3ee59153db8e9326154b6fbb1181a756 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Sep 6 15:24:55 2019 +0200 BUG/MINOR: filters: Properly set the HTTP status code on analysis error When a filter returns an error during the HTTP analysis, an error must be returned if the status code is not already set. On the request path, an error 400 is returned. On the response path, an error 502 is returned. The status is considered as unset if its value is not strictly positive. If needed, this patch may be backported to all versions having filters (as far as 1.7). Because nobody has ever reported any bug, the backport to 2.0 is probably enough. (cherry picked from commit e058f7359f3822beb8552f77a6d439cb053edb3f) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> (cherry picked from commit eab13f042b9a98cadb215a2c29f2ee9164c18f19) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
commit 5513fcaa601dd344be548430fc1760dbedebf4f2 Author: Willy Tarreau <w@1wt.eu> Date: Thu Sep 12 14:01:40 2019 +0200 BUG/MEDIUM: http: also reject messages where "chunked" is missing from transfer-encoding Nathan Davison (@ndavison) reported that in legacy mode we don't correctly reject requests or responses featuring a transfer-encoding header missing the "chunked" value. As mandated in the protocol spec, the test verifies that "chunked" is the last one, but only does so when it is present. As such, "transfer-encoding: foobar" is not rejected, only "transfer-encoding: chunked, foobar" will be. The impact is limited, but if combined with "http-reuse always", it could be used as a help to construct a content smuggling attack against a vulnerable component employing a lenient parser which would ignore the content-length header as soon as it sees a transfer-encoding one, without even parsing it. In this case haproxy would fail to protect it. The…
jiangwenyuan
pushed a commit
that referenced
this issue
Feb 14, 2020
Squashed commit of the following: commit efac87eec5e099f4de499e6a709a07829733329e Author: Willy Tarreau <w@1wt.eu> Date: Fri Nov 15 17:26:16 2019 +0100 [RELEASE] Released version 2.0.9 Released version 2.0.9 with the following main changes : - MINOR: config: warn on presence of "\n" in header values/replacements - BUG/MINOR: mux-h2: do not emit logs on backend connections - MINOR: tcp: avoid confusion in time parsing init - BUG/MINOR: cli: don't call the kw->io_release if kw->parse failed - BUG/MINOR: mux-h2: Don't pretend mux buffers aren't full anymore if nothing sent - BUG/MAJOR: stream-int: Don't receive data from mux until SI_ST_EST is reached - BUG/MINOR: spoe: fix off-by-one length in UUID format string - MINOR: mux: Add a new method to get informations about a mux. - BUG/MEDIUM: stream_interface: Only use SI_ST_RDY when the mux is ready. - BUG/MEDIUM: servers: Only set SF_SRV_REUSED if the connection if fully ready. - BUG/MINOR: config: Update cookie domain warn to RFC6265 - BUG/MEDIUM: mux-h2: report no available stream on a connection having errors - BUG/MEDIUM: mux-h2: immediately remove a failed connection from the idle list - BUG/MEDIUM: mux-h2: immediately report connection errors on streams - BUG/MEDIUM: mux-h1: Disable splicing for chunked messages - BUG/MEDIUM: stream: Be sure to support splicing at the mux level to enable it - MINOR: doc: http-reuse connection pool fix - BUG/MEDIUM: stream: Be sure to release allocated captures for TCP streams - BUG/MINOR: action: do-resolve now use cached response - BUG: dns: timeout resolve not applied for valid resolutions - DOC: management: document reuse and connect counters in the CSV format - DOC: management: document cache_hits and cache_lookups in the CSV format - DOC: management: fix typo on "cache_lookups" stats output - BUG/MINOR: queue/threads: make the queue unlinking atomic - BUG/MEDIUM: listeners: always pause a listener on out-of-resource condition - BUG/MEDIUM: Make sure we leave the session list in session_free(). - CLEANUP: session: slightly simplify idle connection cleanup logic - MINOR: memory: also poison the area on freeing - BUILD: contrib/da: remove an "unused" warning - BUG/MINOR: log: limit the size of the startup-logs - BUG/MEDIUM: filters: Don't call TCP callbacks for HTX streams - BUG/MINOR: mux-h1: Don't set CS_FL_EOS on a read0 when receiving data to pipe commit 1909f505aaf8d1d1e691f1eadbe9fdbadd7c0cc9 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Nov 15 09:41:32 2019 +0100 BUG/MINOR: mux-h1: Don't set CS_FL_EOS on a read0 when receiving data to pipe This is mandatory to process input one more time to add the EOM in the HTX message and to set CS_FL_EOI on the conn-stream. Otherwise, in the stream, a SHUTR will be reported on the corresponding channel without the EOI. It may be erroneously interpreted as an abort. This patch must be backported to 2.0 and 1.9. (cherry picked from commit 3f21611bddc40099e0fa4b1b196ee3b691fe7c81) [wt: adjusted context (no traces)] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 166bd755f38f1577caa3565fad6b5652a61a5680 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Nov 8 15:31:49 2019 +0100 BUG/MEDIUM: filters: Don't call TCP callbacks for HTX streams For now, TCP callbacks are incompatible with the HTX streams because they are designed to manipulate raw buffers. A new callback will probably be added to be used in both modes, raw and HTX. So, for HTX streams, these callbacks are ignored. 
This should not be a real problem because there is no known filters, expect the trace filter, implementing these callbacks. This patch must be backported to 2.0 and 1.9. (cherry picked from commit bb9a7e04bd806cd78baf62eea0a84e1d8cd70573) Signed-off-by: Willy Tarreau <w@1wt.eu> commit d83e71cfbfe7db104018cc564da7b8518a816493 Author: Willy Tarreau <w@1wt.eu> Date: Fri Nov 15 16:00:12 2019 +0100 BUG/MINOR: log: limit the size of the startup-logs This is an alternative to mainline commit 869efd5eeb ("BUG/MINOR: log: make "show startup-log" use a ring buffer instead"). Instead of relying on a log buffer, it limits the size of logs to bufsize. This avoids taking O(N^2) time to start up in case a large config produces many warnings. Typically a 100k backend config with one warning each could take roughly 20 minutes just to produce the message which in the end was not retrievable over the CLI, while now it takes roughly 0.5s. The commit mentioned above requires backporting the ring infrastructure which seems overkill just to fix this. This must be backported to 1.9. commit dcc70052934ed2a16be304d29091a55b040e908d Author: Willy Tarreau <w@1wt.eu> Date: Fri Nov 15 13:39:16 2019 +0100 BUILD: contrib/da: remove an "unused" warning The rcsid variable is static an unused, causing a build warning. Let's just add __attribute__((unused)) to shut the warning. This may be backported to 2.0. (cherry picked from commit ed295cc3449537324a74059647fa35984bc78ab1) Signed-off-by: Willy Tarreau <w@1wt.eu> commit bbcce3dc3078012fe3874530f6564f87bf8bcc95 Author: Willy Tarreau <w@1wt.eu> Date: Fri Nov 15 06:59:54 2019 +0100 MINOR: memory: also poison the area on freeing Doing so sometimes helps detect some UAF situations without the overhead associated to the DEBUG_UAF define. (cherry picked from commit da52035a45d80485ca32a0b81651bba28cf00889) [wt: will be helpful in bug reports] Signed-off-by: Willy Tarreau <w@1wt.eu> commit 6571828db4ecf6612504a25b46da1a91b8af12c8 Author: Willy Tarreau <w@1wt.eu> Date: Fri Nov 15 07:04:24 2019 +0100 CLEANUP: session: slightly simplify idle connection cleanup logic Since previous commit a132e5efa9 ("BUG/MEDIUM: Make sure we leave the session list in session_free().") it's pointless to delete the conn element inside "if" blocks given that the second test is always true as well. Let's simplify this with a single LIST_DEL_INIT() before the test. (cherry picked from commit 5de7817ae874901dfe44838dd26dd10c2d822c1d) [wt: not strictly needed but makes the code more straightforward for future debugging] Signed-off-by: Willy Tarreau <w@1wt.eu> commit a93758c8ac0b01f19bd2110498f11cf252e06b1c Author: Olivier Houchard <ohouchard@haproxy.com> Date: Thu Nov 14 19:26:14 2019 +0100 BUG/MEDIUM: Make sure we leave the session list in session_free(). In session_free(), if we're about to destroy a connection that had no mux, make sure we leave the session_list before calling conn_free(). Otherwise, conn_free() would call session_unown_conn(), which would potentially free the associated srv_list, but session_free() also frees it, so that would lead to a double free, and random memory corruption. This should be backported to 1.9 and 2.0. 
(cherry picked from commit a132e5efa94c962144e78378403c566875a6d37e) Signed-off-by: Willy Tarreau <w@1wt.eu> commit eb1d4861c2460cb0ef3fd5c6acc6f38184f0fbbe Author: Willy Tarreau <w@1wt.eu> Date: Fri Nov 15 10:20:07 2019 +0100 BUG/MEDIUM: listeners: always pause a listener on out-of-resource condition A corner case was opened in the listener_accept() code by commit 3f0d02bbc2 ("MAJOR: listener: do not hold the listener lock in listener_accept()"). The issue is when one listener (or a group of) managed to eat all the proxy's or all the process's maxconn, and another listener tries to accept a new socket. This results in the atomic increment to detect the excess connection count and immediately abort, without pausing the listener, thus the call is immediately performed again. This doesn't happen when the test is run on a single listener because this listener got limited when crossing the limit. But with 2 or more listeners, we don't have this luxury. The solution consists in limiting the listener as soon as we have to decline accepting an incoming connection. This means that the listener will not be marked full yet if it gets the exact connection count but this is not a problem in practice since all other listeners will only be marked full after their first attempt. Thus from now on, a listener is only full once it has already failed taking an incoming connection. This bug was definitely responsible for the unreproduceable occasional reports of high CPU usage showing epoll_wait() returning immediately without accepting an incoming connection, like in bug #129. This fix must be backported to 1.9 and 1.8. (cherry picked from commit 93604edb652542f4149282438dc6e0548cd4d545) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 9410b928a384d4ff1eeab1443806a3c3eaf88b0f Author: Willy Tarreau <w@1wt.eu> Date: Thu Nov 14 14:58:39 2019 +0100 BUG/MINOR: queue/threads: make the queue unlinking atomic There is a very short race in the queues which happens in the following situation: - stream A on thread 1 is being processed by a server - stream B on thread 2 waits in the backend queue for a server - stream B on thread 2 is fed up with waiting and expires, calls stream_free() which calls pendconn_free(), which sees the stream attached - at the exact same instant, stream A finishes on thread 1, sees one stream is waiting (B), detaches it and wakes it up - stream B continues pendconn_free() and calls pendconn_unlink() - pendconn_unlink() now detaches the node again and performs a second deletion (harmless since idempotent), and decrements srv/px->nbpend again => the number of connections on the proxy or server may reach -1 if/when this race occurs. It is extremely tight as it can only occur during the test on p->leaf_p though it has been witnessed at least once. The solution consists in testing leaf_p again once the lock is held to make sure the element was not removed in the mean time. This should be backported to 2.0 and 1.9, probably even 1.8. (cherry picked from commit 9ada030697c945d0e4bcbc85870d6d25f33b76b0) Signed-off-by: Willy Tarreau <w@1wt.eu> commit b896b1a82f5aab8bce3bd6768b9650b860481053 Author: Willy Tarreau <w@1wt.eu> Date: Fri Nov 8 07:29:34 2019 +0100 DOC: management: fix typo on "cache_lookups" stats output The trailing "s" was missing. 
(cherry picked from commit 7297429fa59da935f85d48b4eb4d85458a5db878) Signed-off-by: Willy Tarreau <w@1wt.eu> commit b564f1cdc752d7ea2c3738a343c514dc304283af Author: Jérôme Magnin <jmagnin@haproxy.com> Date: Wed Jul 17 14:04:40 2019 +0200 DOC: management: document cache_hits and cache_lookups in the CSV format Counters for cache_hits and cache_lookups were added with commit a1214a50 ("MINOR: cache: report the number of cache lookups and cache hits") but not documented in management.txt. (cherry picked from commit 34ebb5cbab1801b413750ff8eb0025210a6d0123) Signed-off-by: Willy Tarreau <w@1wt.eu> commit b67a8f328a474925b449849b1fdeab5d0de48232 Author: Jérôme Magnin <jmagnin@haproxy.com> Date: Wed Jul 17 09:24:46 2019 +0200 DOC: management: document reuse and connect counters in the CSV format Counters for connect and reuse were added in the stats with commit f1573848 ("MINOR: backend: count the number of connect and reuse per server and per backend") but not documented the CSV format in management.txt (cherry picked from commit 708eb88845bf6a772e5adfdf9fd660a2b7c89636) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 3697f03e859f5af5a2d759e6c3d9e2816b61871c Author: Baptiste Assmann <bedis9@gmail.com> Date: Thu Nov 7 11:02:18 2019 +0100 BUG: dns: timeout resolve not applied for valid resolutions Documentation states that the interval between 2 DNS resolution is driven by "timeout resolve <time>" directive. From a code point of view, this was applied unless the latest status of the resolution was VALID. In such case, "hold valid" was enforce. This is a bug, because "hold" timers are not here to drive how often we want to trigger a DNS resolution, but more how long we want to keep an information if the status of the resolution itself as changed. This avoid flapping and prevent shutting down an entire backend when a DNS server is not answering. This issue was reported by hamshiva in github issue #345. Backport status: 1.8 (cherry picked from commit f50e1ac4442be41ed8b9b7372310d1d068b85b33) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 44af0d0db64cb2712835369a194a3edbc82851d4 Author: Baptiste Assmann <bedis9@gmail.com> Date: Wed Oct 30 16:06:53 2019 +0100 BUG/MINOR: action: do-resolve now use cached response As reported by David Birdsong on the ML, the HTTP action do-resolve does not use the DNS cache. Actually, the action is "registred" to the resolution for said name to be resolved and wait until an other requester triggers the it. Once the resolution is finished, then the action is updated with the result. To trigger this, you must have a server with runtime DNS resolution enabled and run a do-resolve action with the same fqdn AND they use the same resolvers section. This patch fixes this behavior by ensuring the resolution associated to the action has a valid answer which is not considered as expired. If those conditions are valid, then we can use it (it's the "cache"). Backport status: 2.0 (cherry picked from commit 7264dfe9495a7bfd784b8964508e4204b7e077af) Signed-off-by: Willy Tarreau <w@1wt.eu> commit b6af6b3650a6a1209ba503b6937afef2c08402e8 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Thu Nov 7 14:27:52 2019 +0100 BUG/MEDIUM: stream: Be sure to release allocated captures for TCP streams All TCP and HTTP captures are stored in 2 arrays, one for the request and another for the response. In HAPRoxy 1.5, these arrays are part of the HTTP transaction and thus are released during its cleanup. 
Because in this version, the transaction is part of the stream (in 1.5, streams are still called sessions), the cleanup is always performed, for HTTP and TCP streams. In HAProxy 1.6, the HTTP transaction was moved out from the stream and is now dynamically allocated only when required (becaues of an HTTP proxy or an HTTP sample fetch). In addition, still in 1.6, the captures arrays were moved from the HTTP transaction to the stream. This way, it is still possible to capture elements from TCP rules for a full TCP stream. Unfortunately, the release is still exclusively performed during the HTTP transaction cleanup. Thus, for a TCP stream where the HTTP transaction is not required, the TCP captures, if any, are never released. Now, all captures are released when the stream is freed. This fixes the memory leak for TCP streams. For streams with an HTTP transaction, the captures are now released when the transaction is reset and not systematically during its cleanup. This patch must be backported as fas as 1.6. (cherry picked from commit 5939925a3805f9755cff5a3b9635c1a533bc9184) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> [ Cf: changes in proto_http.c was moved from http_reset_txn() to http_end_txn_clean_session() because it is a better place to do some cleanup on the stream. But in 2.1, this last functions does not exist anymore. ] commit 79a5693c58056d946bb5a17c8818f606abfdee12 Author: Lukas Tribus <lukas@ltri.eu> Date: Wed Nov 6 11:50:25 2019 +0100 MINOR: doc: http-reuse connection pool fix Since 1.9 we actually do use a connection pool, configurable with pool-max-conn. Update the documentation in this regard. Must be backported to 1.9. (cherry picked from commit e8adfeb84b4a1845c41fd23126b80d64c7c59863) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 22b4eefa969bb741d75e78b5da457b0322213f69 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Tue Nov 5 16:49:23 2019 +0100 BUG/MEDIUM: stream: Be sure to support splicing at the mux level to enable it Despite the addition of the mux layer, no change have been made on how to enable the TCP splicing on process_stream(). We still check if transport layer on both sides support the splicing, but we don't check the muxes support. So it is possible to start to splice data with an unencrypted H2 connection on a side and an H1 connection on the other. This leads to a freeze of the stream until a client or server timeout is reached. This patch fixed a part of the issue #356. It must be backported as far as 1.8. (cherry picked from commit 276c1e0533e77008445d57a1953f1f516d66877d) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 2c411bed3a4d49f1e0ce6e20d2d968835490ff93 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Tue Nov 5 16:24:27 2019 +0100 BUG/MEDIUM: mux-h1: Disable splicing for chunked messages The mux H1 announces the support of the TCP splicing. It only works for payload data. It works for messages with an explicit content-length or for tunnelled data. For chunked messages, the mux H1 should normally not try to xfer more than the current chunk through the pipe. Unfortunately, this works on the read side but the send is completely bogus. During the output formatting, the announced size of chunks does not handle the size that will be spliced. Because there is no formatting when spliced data are sent, the produced message is malformed and rejected by the peer. For now, because it is quick and simple, the TCP splicing is disabled for chunked messages. I will try to enable it again in a proper way. 
I don't know for now if it will be backportable in previous versions. This will depend on the amount of changes required to handle it. This patch fixes a part of the issue #356. It must be backported to 2.0 and 1.9. (cherry picked from commit 9fa40c46df5f52692fe62be008131f8dfa4d83af) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 902df226ae000f6cb69e034491853d8cf462f458 Author: Willy Tarreau <w@1wt.eu> Date: Thu Oct 31 15:48:18 2019 +0100 BUG/MEDIUM: mux-h2: immediately report connection errors on streams In case a stream tries to send on a connection error, we must report the error so that the stream interface keeps the data available and may safely retry on another connection. Till now this would happen only before the connection was established, not in case of a failed handshake or an early GOAWAY for example. This should be backported to 2.0 and 1.9. (cherry picked from commit cab2295ae71150d6722505945463b3f1d4627e6e) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit ab66e15bbc95ae362c124f400dfebd58438e65b2 Author: Willy Tarreau <w@1wt.eu> Date: Thu Oct 31 15:36:30 2019 +0100 BUG/MEDIUM: mux-h2: immediately remove a failed connection from the idle list If a connection faces an error or a timeout, it must be removed from its idle list ASAP. We certainly don't want to risk sending new streams on it. This should be backported to 2.0 (replacing MT_LIST_DEL with LIST_DEL_LOCKED) and 1.9 (there's no lock there, the idle lists are per-thread and per-server however a LIST_DEL_INIT will be needed). (cherry picked from commit 4481e26e5dd2bc04df494c3f176aa5ceea3d63d5) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit d63b85da9b7a3dff7ce1361e038ee114f41481fb Author: Willy Tarreau <w@1wt.eu> Date: Thu Oct 31 15:10:03 2019 +0100 BUG/MEDIUM: mux-h2: report no available stream on a connection having errors If an H2 mux has met an error, we must not report available streams anymore, or it risks to accumulate new streams while not being able to process them. This should be backported to 2.0 and 1.9. (cherry picked from commit c61966f9b468b72528f854f4bc64bb5934751384) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 72c48ce7f0e5a8811148c46b557032cbd4bb76d3 Author: Joao Morais <jcmoraisjr@gmail.com> Date: Wed Oct 30 21:04:00 2019 -0300 BUG/MINOR: config: Update cookie domain warn to RFC6265 The domain option of the cookie keyword allows to define which domain or domains should use the the cookie value of a cookie-based server affinity. If the domain does not start with a dot, the user agent should only use the cookie on hosts that matches the provided domains. If the configured domain starts with a dot, the user agent can use the cookie with any host ending with the configured domain. haproxy config parser helps the admin warning about a potentially buggy config: defining a domain without an embedded dot which does not start with a dot, which is forbidden by the RFC. The current condition to issue the warning implements RFC2109. This change updates the implementation to RFC6265 which allows domain without a leading dot. Should be backported to all supported versions. The feature exists at least since 1.5. (cherry picked from commit e1583751b67704f297060afaabe87fd7d8d602a2) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit 36deb2108dbc9a0720b18ca997fff529376c3dc2 Author: Olivier Houchard <ohouchard@haproxy.com> Date: Fri Oct 25 17:00:54 2019 +0200 BUG/MEDIUM: servers: Only set SF_SRV_REUSED if the connection if fully ready. 
In connect_server(), if we're reusing a connection, only use SF_SRV_REUSED if the connection is fully ready. We may be using a multiplexed connection created by another stream that is not yet ready, and may fail. If we set SF_SRV_REUSED, process_stream() will then not wait for the timeout to expire, and will retry to connect immediately. This should be backported to 1.9 and 2.0. This commit depends on 55234e33708c5a584fb9efea81d71ac47235d518. (cherry picked from commit e8f5f5d8b228d71333fb60229dc908505baf9222) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
commit 89ed9eca1eec07d0ef24bfe24c7d074e942c97cf Author: Olivier Houchard <ohouchard@haproxy.com> Date: Fri Oct 25 16:25:20 2019 +0200 BUG/MEDIUM: stream_interface: Only use SI_ST_RDY when the mux is ready. In si_connect(), only switch the stream_interface status to SI_ST_RDY if we're reusing a connection and if the connection's mux is ready. Otherwise, maybe we're reusing a connection that is not fully established yet, and may fail, and setting SI_ST_RDY would mean we would not be able to retry the connect. This should be backported to 1.9 and 2.0. This commit depends on 55234e33708c5a584fb9efea81d71ac47235d518. (cherry picked from commit 6e8e2ec8494f3ed92f0c80c8382f80072384a4f3) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
commit 198d1293f9f364bdbd918dddeee225d1ce402a9a Author: Olivier Houchard <ohouchard@haproxy.com> Date: Fri Oct 25 16:19:26 2019 +0200 MINOR: mux: Add a new method to get information about a mux. Add a new method, ctl(), to muxes. It uses an "enum mux_ctl_type" to let it know which information we're asking for, and can output it either directly by returning the expected value, or by filling the optional "output" argument. Right now, the only known mux_ctl_type is MUX_STATUS, which will return 0 if the mux is not ready, or MUX_STATUS_READY if the mux is ready. We probably want to backport this to 1.9 and 2.0. (cherry picked from commit 9b8e11e691619b9cc0336f57bcdfacb015864a97) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
commit 079ec126385bb84cca936f0d17c5d98252872e93 Author: Willy Tarreau <w@1wt.eu> Date: Tue Oct 29 10:25:49 2019 +0100 BUG/MINOR: spoe: fix off-by-one length in UUID format string The per-thread UUID string produced by generate_pseudo_uuid() could be off by one character due to too small a size limit in snprintf(). In practice the UUID remains large enough to avoid any collision though. This should be backported to 2.0 and 1.9. (cherry picked from commit 4fd6d671b239942c93a2f48850b32b9be150b1ba) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
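As a side note on this class of off-by-one, here is a small self-contained illustration (it is not the actual generate_pseudo_uuid() code): a canonical 36-character UUID needs a 37-byte buffer, and passing a size one byte short to snprintf() silently drops the last character:

    #include <stdio.h>

    /* Illustrative only: snprintf()'s size argument must include room for
     * the terminating NUL, otherwise the output is truncated. */
    int main(void)
    {
        char too_small[37], ok[37];
        snprintf(too_small, 36, "%s", "123e4567-e89b-12d3-a456-426614174000");
        snprintf(ok, sizeof(ok), "%s", "123e4567-e89b-12d3-a456-426614174000");
        printf("truncated: %s\n", too_small); /* last character missing */
        printf("complete:  %s\n", ok);
        return 0;
    }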
commit 27ebcefd41b3e44395c3fe71939ef98b03f98e7b Author: Christopher Faulet <cfaulet@haproxy.com> Date: Fri Oct 25 10:21:01 2019 +0200 BUG/MAJOR: stream-int: Don't receive data from mux until SI_ST_EST is reached This bug is pretty pernicious and has serious consequences: in 2.1, an infinite loop in process_stream() because the backend stream-interface remains in the ready state (SI_ST_RDY); in 2.0, process_stream() is called in a loop because the stream-interface remains blocked in the connect state (SI_ST_CON). In both cases, it happens after a connection retry attempt. In 1.9, it seems not to happen. But it may be just by chance, or just because it is harder to get the right conditions to trigger the bug. However, reading the code, the bug seems to exist there too. Here is how the bug happens in 2.1. When we try to establish a new connection to a server, the corresponding stream-interface is first set to the connect state (SI_ST_CON). When the underlying connection is known to be connected (the flag CO_FL_CONNECTED set), the stream-interface is switched to the ready state (SI_ST_RDY). It is a transient state between the connect state (SI_ST_CON) and the established state (SI_ST_EST). It must be handled on the next call to process_stream(), which is responsible for operating the transition. During all this time, errors can occur: a connection error or a client abort. The transient state SI_ST_RDY was introduced to give process_stream() a chance to catch these errors before considering the connection as fully established. Unfortunately, if a read0 is caught in states SI_ST_CON or SI_ST_RDY, it is possible to have a shutdown without a transition to SI_ST_DIS (in fact, here, SI_ST_CON is switched to SI_ST_RDY). This happens if the request was fully received and analyzed. In this case, the flag SI_FL_NOHALF is set on the backend stream-interface. If an error is also reported during the connect, the behavior is undefined because an error is returned to the client and a connection retry is performed. So on the next connection attempt to the server, if another error is reported, a client abort is detected. But the shutdown for writes was already done. So the transition to the state SI_ST_DIS is impossible. We stay in the state SI_ST_RDY. Because it is a transient state, we loop in process_stream() to perform the transition. It is hard to understand how the bug happens reading the code and even harder to explain. But there is a trivial way to hit the bug by sending h2 requests to a server only speaking h1. For instance, with the following config:
    listen tst
        bind *:80
        server www 127.0.0.1:8000 proto h2 # in reality, it is a HTTP/1.1 server
It is a configuration error, but it is an easy way to observe the bug. Note it may happen with a valid configuration. So, after a careful analysis, it appears that si_cs_recv() should never be called for a stream-interface that is not fully established. This way the connection retries will be performed before reporting an error to the client. Thus, if a shutdown is performed because a read0 is handled, the stream-interface is unconditionally set to the transient state SI_ST_DIS. This patch must be backported to 2.0 and 1.9. However on these versions, this patch reveals a design flaw about connections and a bad way to perform the connection retries. We are working on it. (cherry picked from commit 04400bc7875fcc362495b0f25e75ba6fc2f44850) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
commit 074230876d05bdf3fe33893889b326da14ab8ae9 Author: Christopher Faulet <cfaulet@haproxy.com> Date: Thu Oct 24 10:31:01 2019 +0200 BUG/MINOR: mux-h2: Don't pretend mux buffers aren't full anymore if nothing sent In h2_send(), when something is sent, we remove the flags (H2_CF_MUX_MFULL|H2_CF_DEM_MROOM) on the h2 connection. This way, we are able to wake up all streams waiting to send data. Unfortunately, these flags were unconditionally removed, even when nothing was sent. So if the h2c is blocked because the mux buffers are full and we are unable to send anything, all streams in the send_list are woken up for nothing. Now, we only remove these flags if at least one send succeeds. This patch must be backported to 2.0. (cherry picked from commit 69fe5cea213afd0c7465094e9dfead93143dcf3f) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
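To illustrate the "only unblock waiters when real progress was made" idea described above, here is a hypothetical, self-contained sketch; the names fake_mux, fake_send and the FL_* flags are invented for the example and are not the mux-h2 code:

    #include <stddef.h>
    #include <stdio.h>

    #define FL_MUX_FULL  0x01U
    #define FL_DEM_ROOM  0x02U

    struct fake_mux { unsigned int flags; };

    /* stand-in send path: pretend nothing could be written this time */
    static size_t fake_send(struct fake_mux *m) { (void)m; return 0; }

    int main(void)
    {
        struct fake_mux m = { .flags = FL_MUX_FULL | FL_DEM_ROOM };
        size_t sent = fake_send(&m);
        if (sent > 0)                              /* only unblock on real progress */
            m.flags &= ~(FL_MUX_FULL | FL_DEM_ROOM);
        printf("flags=%#x\n", m.flags);            /* still 0x3: waiters not woken */
        return 0;
    }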
commit d4f20fadd9c3145de0eb5f5434f57b9fffc61062 Author: William Lallemand <wlallemand@haproxy.com> Date: Fri Oct 25 21:10:14 2019 +0200 BUG/MINOR: cli: don't call the kw->io_release if kw->parse failed The io_release() callback of the cli_kw is supposed to be used to clean up what an io_handler() has made. It is called once the work in the IO handler is finished, or when the connection was aborted by the client. This patch fixes a bug where the io_release callback was called even when the parse() callback failed, which means that io_release() could be called even if the io_handler() was not called. Should be backported to every version that has a cli_kw->release() (as far as 1.7). (cherry picked from commit 90b098c921e15f912dbde42658e34780f0ba446d) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
commit 74a1e4393f7a7b194abb4f428fd02c7c088f6c67 Author: William Dauchy <w.dauchy@criteo.com> Date: Wed Oct 23 19:31:36 2019 +0200 MINOR: tcp: avoid confusion in time parsing init We never enter val_fc_time_value when an associated fetcher such as `fc_rtt` is called without argument, meaning `type == ARGT_STOP` will never be true and so the default `data.sint = TIME_UNIT_MS` will never be set. Remove this part to avoid thinking the default data.sint is set to ms while reading the code. Signed-off-by: William Dauchy <w.dauchy@criteo.com> [Cf: This patch may safely be backported as far as 1.7. But it does not matter if it is not.] (cherry picked from commit b705b4d7d308d1132a772f3ae2d6113447022a60) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
commit 21178a582238ee1c57d0aef73c97711741dd93ed Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 23 11:06:35 2019 +0200 BUG/MINOR: mux-h2: do not emit logs on backend connections The logs were added to the H2 mux so that we can report logs in case of errors that prevent a stream from being created, but as a side effect these logs are emitted twice for backend connections: once by the H2 mux itself and another time by the upper layer stream. It can even happen more with connection retries. This patch makes sure we do not emit logs for backend connections. It should be backported to 2.0 and 1.9. (cherry picked from commit 9364a5fda33a2f591d5e2640249a54af8955fb8b) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>
commit 41898a216e92c80c1354b67613834be1b3e97864 Author: Willy Tarreau <w@1wt.eu> Date: Fri Oct 25 14:16:14 2019 +0200 MINOR: config: warn on presence of "\n" in header values/replacements Yves Lafon reported an interesting case where an old rsprep rule used to conditionally append a header field by inserting a \n in the existing value was breaking H2 in HTX mode, with the browser rightfully reporting a PROTOCOL_ERROR when facing the \n. In legacy mode, since the response is first parsed again as an HTTP/1 message before being converted to H2, the issue does not happen. We should definitely discourage using this old trick nowadays; http-request and http-response rules were made exactly to end this. Let's detect this and emit a warning when present. In 2.0 there is already a warning recalling that these rules are deprecated and which explains what to do instead, so the user now gets all the relevant information to convert them. There is no upstream commit ID for this patch because these rules were indeed removed from 2.1. This patch could be backported to 1.9 as it can also trigger the problem when HTX is enabled.
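As an aside, the kind of check such a warning relies on can be illustrated with a trivial stand-alone helper (hypothetical code, not HAProxy's parser): a header value embedding CR or LF cannot be represented as a single HTTP/2 field, so it should be detected early:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: a header value containing CR or LF would have to be
     * split across fields, which H2 forbids; reject it up front. */
    static int header_value_is_safe(const char *v)
    {
        return strpbrk(v, "\r\n") == NULL;
    }

    int main(void)
    {
        printf("%d\n", header_value_is_safe("no-cache"));            /* 1 */
        printf("%d\n", header_value_is_safe("a\r\nSet-Cookie: x"));  /* 0 */
        return 0;
    }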
commit 60e6020c8f2c9efc5f67208efaeafd09c719a29b Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 23 08:06:13 2019 +0200 [RELEASE] Released version 2.0.8 Released version 2.0.8 with the following main changes : - BUG/MINOR: stats: Add a missing break in a switch statement - BUG/MINOR: lua: Properly initialize the buffer's fields for string samples in hlua_lua2(smp|arg) - BUG/MEDIUM: lua: Store stick tables into the sample's `t` field - BUG/MINOR: action: do-resolve does not yield on requests with body - MINOR: mux-h2: add a per-connection list of blocked streams - BUILD: ebtree: make eb_is_empty() and eb_is_dup() take a const - BUG/MEDIUM: mux-h2: do not enforce timeout on long connections - BUG/MINOR: peers: crash on reload without local peer. - BUG/MEDIUM: cache: make sure not to cache requests with absolute-uri - DOC: clarify some points around http-send-name-header's behavior - DOC: fix typo in Prometheus exporter doc - MINOR: stats: mention in the help message support for "json" and "typed" - BUG/MEDIUM: applet: always check a fast running applet's activity before killing - BUG/MINOR: ssl: abort on sni allocation failure - BUG/MINOR: ssl: free the sni_keytype nodes - BUG/MINOR: ssl: abort on sni_keytypes allocation failure - BUILD: ssl: wrong #ifdef for SSL engines code - BUG/MEDIUM: htx: Catch chunk_memcat() failures when HTX data are formatted to h1 - BUG/MINOR: chunk: Fix tests on the chunk size in functions copying data - BUG/MINOR: mux-h1: Mark the output buffer as full when the xfer is interrupted - BUG/MINOR: mux-h1: Capture ignored parsing errors - BUG/MINOR: WURFL: fix send_log() function arguments - MINOR: version: make the version strings variables, not constants - BUG/MINOR: http-htx: Properly set htx flags on error files to support keep-alive - BUG/MINOR: mworker/ssl: close openssl FDs unconditionally - BUG/MINOR: tcp: Don't alter counters returned by tcp info fetchers - BUG/MEDIUM: mux_pt: Make sure we don't have a conn_stream before freeing. - BUG/MAJOR: idle conns: schedule the cleanup task on the correct threads - Revert e8826ded5fea3593d89da2be5c2d81c522070995. - BUG/MEDIUM: mux_pt: Don't destroy the connection if we have a stream attached. - BUG/MEDIUM: mux_pt: Only call the wake emthod if nobody subscribed to receive. - REGTEST: mcli/mcli_show_info: launch a 'show info' on the master CLI - CLEANUP: ssl: make ssl_sock_load_cert*() return real error codes - CLEANUP: ssl: make ssl_sock_put_ckch_into_ctx handle errcode/warn - CLEANUP: ssl: make ssl_sock_load_dh_params handle errcode/warn - CLEANUP: bind: handle warning label on bind keywords parsing. - BUG/MEDIUM: ssl: 'tune.ssl.default-dh-param' value ignored with openssl > 1.1.1 - BUG/MINOR: mworker/cli: reload fail with inherited FD - BUG/MINOR: ssl: Fix fd leak on error path when a TLS ticket keys file is parsed - BUG/MINOR: stick-table: Never exceed (MAX_SESS_STKCTR-1) when fetching a stkctr - BUG/MINOR: cache: alloc shctx after check config - BUG/MINOR: sample: Make the `field` converter compatible with `-m found` - BUG/MINOR: mux-h2: also make sure blocked legacy connections may expire - BUG/MEDIUM: http: unbreak redirects in legacy mode - BUG/MINOR: ssl: fix memcpy overlap without consequences. 
- BUG/MINOR: stick-table: fix an incorrect 32 to 64 bit key conversion - BUG/MEDIUM: pattern: make the pattern LRU cache thread-local and lockless commit 7fdd81c43fd75349d4496649d2176ad258e55a4b Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 23 06:59:31 2019 +0200 BUG/MEDIUM: pattern: make the pattern LRU cache thread-local and lockless As reported in issue #335, a lot of contention happens on the PATLRU lock when performing expensive regex lookups. This is absurd since the purpose of the LRU cache was to have a fast cache for expressions, thus the cache must not be shared between threads and must remain lockless. This commit makes the LRU cache thread-local and gets rid of the PATLRU lock. A test with 7 threads on 4 cores climbed from 67kH/s to 369kH/s, or a scalability factor of 5.5. Given the huge performance difference and the regression caused to users migrating from processes to threads, this should be backported at least to 2.0. Thanks to Brian Diekelman for his detailed report about this regression. (cherry picked from commit 403bfbb130f9fb31e52d441ebc1f8227f6883c22) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 6fe22ed08a642d27f1a228c6f3b7f9f0dd0ea4cd Author: Willy Tarreau <w@1wt.eu> Date: Wed Oct 23 06:21:05 2019 +0200 BUG/MINOR: stick-table: fix an incorrect 32 to 64 bit key conversion As reported in issue #331, the code used to cast a 32-bit to a 64-bit stick-table key is wrong. It only copies the 32 lower bits in place on little endian machines or overwrites the 32 higher ones on big endian machines. It ought to simply remove the wrong cast dereference. This bug was introduced when changing stick table keys to samples in 1.6-dev4 by commit bc8c404449 ("MAJOR: stick-tables: use sample types in place of dedicated types") so it the fix must be backported as far as 1.6. (cherry picked from commit 28c63c15f572a1afeabfdada6a0a4f4d023d05fc) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 7b34de3f4ccb3db391a416ef1796cc0a35b11712 Author: Emeric Brun <ebrun@haproxy.com> Date: Tue Oct 8 18:27:37 2019 +0200 BUG/MINOR: ssl: fix memcpy overlap without consequences. A trick is used to set SESSION_ID, and SESSION_ID_CONTEXT lengths to 0 and avoid ASN1 encoding of these values. There is no specific function to set the length of those parameters to 0 so we fake this calling these function to a different value with the same buffer but a length to zero. But those functions don't seem to check the length of zero before performing a memcpy of length zero but with src and dst buf on the same pointer, causing valgrind to bark. So the code was re-work to pass them different pointers even if buffer content is un-used. In a second time, reseting value, a memcpy overlap happened on the SESSION_ID_CONTEXT. It was re-worked and this is now reset using the constant global value SHCTX_APPNAME which is a different pointer with the same content. This patch should be backported in every version since ssl support was added to haproxy if we want valgrind to shut up. This is tracked in github issue #56. (cherry picked from commit eb46965bbb21291aab75ae88f033d9c9bab4a785) Signed-off-by: Willy Tarreau <w@1wt.eu> commit 67bd3fce5a6bfe36e95df1f1ba98aca4fcdbe57c Author: Willy Tarreau <w@1wt.eu> Date: Tue Oct 22 18:13:44 2019 +0200 BUG/MEDIUM: http: unbreak redirects in legacy mode As reported in github issue #334, HTTP redirects with no server result in a 503 in legacy mode in 2.0 (and only this version). 
What happens is that redirects have long relied on a corner case in the connection setup code, which considered that a SHUTW_NOW would be turned into a SHUTW and as a consequence, prevent the transition to ST_ST_REQ. But during 2.0 development, this old assumption appeared to be broken as it would prevent the master-worker CLI from receiving short commands followed by a close : the one-line request was sent with the close and the request was aborted before being turned into a connect for the applet. The code setup sequence was then rearranged to address this, causing new breakage which was then addressed by commits c9aecc8ff2 ("BUG/MEDIUM: stream: Don't request a server connection if a shutw was scheduled") and its fix 5e1a9d715e ("BUG/MEDIUM: stream: Fix the way early aborts on the client side are handled") respectively. But at this point the redirect code relying on SHUTW_NOW alone causes a transition to SI_ST_REQ since it doesn't remove AUTO_CONNECT, and while the redirect respnse is loaded into the response buffer, the 503 wipes everything and aborts. Note that this doesn't happen in HTX mode because HTX redirects make use of channel_abort() on the request, which does remove AUTO_CONNECT. We'd rather not use channel_abort() in legacy as it also closes the read side and will terminate keep-alive. Instead this patch simply adds the required channel_dont_connect() to the redirect code to perform exactly what's missing: prevent the connection from being automatically setup. It doesn't seem that other parts of the code would require anything similar. In fact the redirects and errors are the only cases where a message is inserted into the response channel before connecting to anything (other cases involve an applet). And error files are not subject to this problem because channel_abort() is used, thus AUTO_CONNECT is properly cleared. This commit is solely for 2.0. It doesn't have any equivalent in 2.1 as legacy code was removed. It doesn't need to be backported as 1.9 and earlier are still sensitive to the SHUTW_NOW flag alone. However it should be safe to backport it there if another fix would depend on it. commit 55dc0842fc105eb87c5d1dae68a6c613396e2103 Author: Willy Tarreau <w@1wt.eu> Date: Tue Oct 22 10:04:39 2019 +0200 BUG/MINOR: mux-h2: also make sure blocked legacy connections may expire The backport of commit 2dcdc2236 ("MINOR: mux-h2: add a per-connection list of blocked streams") missed one addition of LIST_ADDQ(blocked_list) for the legacy version. This makes the stream not be counted as blocked and will not let the connection expire in this specific case. This fix is specific to 2.0 and must be backported to 1.9 as well. commit 4fa9857b3dc57703c99982a140df5d8119351262 Author: Tim Duesterhus <tim@bastelstu.be> Date: Wed Oct 16 15:11:15 2019 +0200 BUG/MINOR: sample: Make the `field` converter compatible with `-m found` Previously an expression like: path,field(2,/) -m found always returned `true`. Bug exists since the `field` converter exists. That is: f399b0debfc6c7dc17c6ad503885c911493add56 The fix should be backported to 1.6+. (cherry picked from commit 4381d26edc03faa46401eb0fe82fd7be84be14fd) Signed-off-by: Christopher Faulet <cfaulet@haproxy.com> commit e4876e2b03930a5e280c77f9ebd59f861080a3c5 Author: William Lallemand <wlallemand@haproxy.com> Date: Wed Aug 28 15:22:49 2019 +0200 BUG/MINOR: cache: alloc shctx after check config When running haproxy -c, the cache parser is trying to allocate the size of the cache. 
This can be a problem in an environment where the RAM is limited. This patch moves the cache allocation in the post_check callback which is not executed during a -c.

This patch may be backported at least to 2.0 and 1.9. In 1.9, the callbacks registration mechanism is not the same. So the patch will have to be adapted. No need to backport it to 1.8, the code is probably too different.

(cherry picked from commit d1d1e229453a492a538245f6a72ba6929eca9de1)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 33c7e12479cb9bdc2e7e3783fda78a1b2c242363
Author: Christopher Faulet <cfaulet@haproxy.com>
Date: Mon Oct 21 10:53:34 2019 +0200

BUG/MINOR: stick-table: Never exceed (MAX_SESS_STKCTR-1) when fetching a stkctr

When a stick counter is fetched, it is important that the requested counter does not exceed (MAX_SESS_STKCTR -1). Actually, there is no bug with a default build because, by construction, MAX_SESS_STKCTR is defined to 3 and we know that we never exceed the max value. scN_* sample fetches are numbered from 0 to 2. For other sample fetches, the value is tested. But there is a bug if MAX_SESS_STKCTR is set to a lower value. For instance 1. In this case the counters sc1_* and sc2_* may be undefined.

This patch fixes the issue #330. It must be backported as far as 1.7.

(cherry picked from commit a9fa88a1eac9bd0ad2cfb761c4b69fd500a1b056)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 2bbc80ded1bc90dbf406e255917a1aa59c52902c
Author: Christopher Faulet <cfaulet@haproxy.com>
Date: Mon Oct 21 09:55:49 2019 +0200

BUG/MINOR: ssl: Fix fd leak on error path when a TLS ticket keys file is parsed

When an error occurred in the function bind_parse_tls_ticket_keys(), during the configuration parsing, the opened file is not always closed. To fix the bug, all errors are catched at the same place, where all ressources are released.

This patch fixes the bug #325. It must be backported as far as 1.7.

(cherry picked from commit e566f3db11e781572382e9bfff088a26dcdb75c5)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 79eeb2bafdff3fd6a197be99c49202c22ddeca35
Author: William Lallemand <wlallemand@haproxy.com>
Date: Fri Oct 18 21:16:39 2019 +0200

BUG/MINOR: mworker/cli: reload fail with inherited FD

When using the master CLI with 'fd@', during a reload, the master CLI proxy is stopped. Unfortunately if this is an inherited FD it is closed too, and the master CLI won't be able to bind again during the re-execution. It lead the master to fallback in waitpid mode.

This patch forbids the inherited FDs in the master's listeners to be closed during a proxy_stop().

This patch is mandatory to use the -W option in VTest versions that contain the -mcli feature. (https://github.com/vtest/VTest/commit/86e65f1024453b1074d239a88330b5150d3e44bb)

Should be backported as far as 1.9.

(cherry picked from commit f7f488d8e9740d64cf82b7ef41e55d4f36fe1a43)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit d6de151248603b357565ae52fe92440e66c1177c
Author: Emeric Brun <ebrun@haproxy.com>
Date: Thu Oct 17 14:53:03 2019 +0200

BUG/MEDIUM: ssl: 'tune.ssl.default-dh-param' value ignored with openssl > 1.1.1

If openssl 1.1.1 is used, c2aae74f0 commit mistakenly enables DH automatic feature from openssl instead of ECDH automatic feature. There is no impact for the ECDH one because the feature is always enabled for that version. But doing this, the 'tune.ssl.default-dh-param' was completely ignored for DH parameters.

This patch fix the bug calling 'SSL_CTX_set_ecdh_auto' instead of 'SSL_CTX_set_dh_auto'.

Currently some users may use a 2048 DH bits parameter, thinking they're using a 1024 bits one. Doing this, they may experience performance issue on light hardware.

This patch warns the user if haproxy fails to configure the given DH parameter. In this case and if openssl version is > 1.1.0, haproxy will let openssl to automatically choose a default DH parameter. For other openssl versions, the DH ciphers won't be usable. A commonly case of failure is due to the security level of openssl.cnf which could refuse a 1024 bits DH parameter for a 2048 bits key:

$ cat /etc/ssl/openssl.cnf
...
[system_default_sect]
MinProtocol = TLSv1
CipherString = DEFAULT@SECLEVEL=2

This should be backport into any branch containing the commit c2aae74f0. It requires all or part of the previous CLEANUP series.

This addresses github issue #324.

(cherry picked from commit 6624a90a9ac2edb947a8c70fa6a8a283449750c6)
Signed-off-by: Emeric Brun <ebrun@haproxy.com>

commit df6dd890fd1167446326e99a816b9ba7ac86329f
Author: Emeric Brun <ebrun@haproxy.com>
Date: Thu Oct 17 16:45:56 2019 +0200

CLEANUP: bind: handle warning label on bind keywords parsing.

All bind keyword parsing message were show as alerts. With this patch if the message is flagged only with ERR_WARN and not ERR_ALERT it will show a label [WARNING] and not [ALERT].

(cherry picked from commit 0655c9b22213a0f5716183106d86a995e672d19b)
Signed-off-by: Emeric Brun <ebrun@haproxy.com>

commit cfc1afe9f21ec27612ed4ad84c4a066c68ca24af
Author: Emeric Brun <ebrun@haproxy.com>
Date: Thu Oct 17 13:27:40 2019 +0200

CLEANUP: ssl: make ssl_sock_load_dh_params handle errcode/warn

ssl_sock_load_dh_params used to return >0 or -1 to indicate success or failure. Make it return a set of ERR_* instead so that its callers can transparently report its status. Given that its callers only used to know about ERR_ALERT | ERR_FATAL, this is the only code returned for now. An error message was added in the case of failure and the comment was updated.

(cherry picked from commit 7a88336cf83cd1592fb8e7bc456d72c00c2934e4)
Signed-off-by: Emeric Brun <ebrun@haproxy.com>

commit 394701dc80ac9d429b12d405973fb30c348b81f3
Author: Emeric Brun <ebrun@haproxy.com>
Date: Thu Oct 17 13:25:14 2019 +0200

CLEANUP: ssl: make ssl_sock_put_ckch_into_ctx handle errcode/warn

ssl_sock_put_ckch_into_ctx used to return 0 or >0 to indicate success or failure. Make it return a set of ERR_* instead so that its callers can transparently report its status. Given that its callers only used to know about ERR_ALERT | ERR_FATAL, this is the only code returned for now. And a comment was updated.

(cherry picked from commit a96b582d0eaf1a7a9b21c71b8eda2965f74699d4)
Signed-off-by: Emeric Brun <ebrun@haproxy.com>

commit b131c870f9fbb5553d8970bb039609f97e1cc6e6
Author: Willy Tarreau <w@1wt.eu>
Date: Wed Oct 16 16:42:19 2019 +0200

CLEANUP: ssl: make ssl_sock_load_cert*() return real error codes

These functions were returning only 0 or 1 to mention success or error, and made it impossible to return a warning. Let's make them return error codes from ERR_* and map all errors to ERR_ALERT|ERR_FATAL for now since this is the only code that was set on non-zero return value. In addition some missing comments were added or adjusted around the functions' return values.

(cherry picked from commit bbc91965bf4bc7e08c5a9b93fdfa28a64c0949d3)
[EBR: also include a part of 054563de1]
Signed-off-by: Emeric Brun <ebrun@haproxy.com>

commit 1405aa503a87c1d05d3043886fd3a03b1ce5f8c7
Author: William Lallemand <wlallemand@haproxy.com>
Date: Tue Oct 1 17:53:58 2019 +0200

REGTEST: mcli/mcli_show_info: launch a 'show info' on the master CLI

This test launches a HAProxy process in master worker with 'nbproc 4'. It sends a "show info" to the process 3 and verify that the right process replied.

This regtest depends on the support of the master CLI for VTest.

(cherry picked from commit cd4827746940fb0d55b5e7d027747d98bb2c5f8a)
Signed-off-by: Willy Tarreau <w@1wt.eu>

commit aafb6cc6563bd3b0eaefd02b42c7ad844e3d867e
Author: Olivier Houchard <cognet@ci0.org>
Date: Fri Oct 18 14:18:29 2019 +0200

BUG/MEDIUM: mux_pt: Only call the wake emthod if nobody subscribed to receive.

In mux_pt_io_cb(), instead of always calling the wake method, only do so if nobody subscribed for receive. If we have a subscription, just wake the associated tasklet up.

This should be backported to 1.9 and 2.0.

(cherry picked from commit 2ed389dc6e27257997f83e3f22cb6bf8898a2a5a)
Signed-off-by: Willy Tarreau <w@1wt.eu>

commit a5115f2a4cb6ff11198dc8a5c598b3d75562f751
Author: Olivier Houchard <cognet@ci0.org>
Date: Fri Oct 18 13:56:40 2019 +0200

BUG/MEDIUM: mux_pt: Don't destroy the connection if we have a stream attached.

There's a small window where the mux_pt tasklet may be woken up, and thus mux_pt_io_cb() get scheduled, and then the connection is attached to a new stream. If this happen, don't do anything, and just let the stream know by calling its wake method. If the connection had an error, the stream should take care of destroying it by calling the detach method.

This should be backported to 2.0 and 1.9.

(cherry picked from commit ea510fc5e7cf8ead040253869160b0d2266ce65f)
Signed-off-by: Willy Tarreau <w@1wt.eu>

commit 6d206de892aba6449bc63f3c74e784d5f45722c9
Author: Olivier Houchard <cognet@ci0.org>
Date: Fri Oct 18 10:59:30 2019 +0200

Revert e8826ded5fea3593d89da2be5c2d81c522070995.

This reverts commit "BUG/MEDIUM: mux_pt: Make sure we don't have a conn_stream before freeing.". mux_pt_io_cb() is only used if we have no associated stream, so we will never have a cs, so there's no need to check that, and we of course have to destroy the mux in mux_pt_detach() if we have no associated session, or if there's an error on the connection.

This should be backported to 2.0 and 1.9.

(cherry picked from commit 9dce2c53a8e49d43b501c3025d41705d302b1df1)
Signed-off-by: Willy Tarreau <w@1wt.eu>

commit 5fa7e736c3c35819fed7cfb4ddb4609a7d352b3b
Author: Willy Tarreau <w@1wt.eu>
Date: Fri Oct 18 08:50:49 2019 +0200

BUG/MAJOR: idle conns: schedule the cleanup task on the correct threads

The idle cleanup tasks' masks are wrong for threads 32 to 64, which causes the wrong thread to wake up and clean the connections that it does not own, with a risk of crash or infinite loop depending on concurrent accesses. For thread 32, any thread between 32 and 64 will be woken up, but for threads 33 to 64, in fact threads 1 to 32 will run the task instead.

This issue only affects deployments enabling more than 32 threads. While is it not common in 1.9 where this has to be explicit, and can easily be dealt with by lowering the number of threads, it can be more common in 2.0 since by default the thread count is determined based on the number of available processors, hence the MAJOR tag which is mostly relevant to 2.x.

The problem was first introduced into 1.9-dev9 by commit 0c18a6fe3 ("MEDIUM: servers: Add a way to keep idle connections alive.") and was later moved to cfgparse.c by commit 980855bd9 ("BUG/MEDIUM: server: initialize the orphaned conns lists and tasks at the end").

This patch needs to be backported as far as 1.9, with care as 1.9 is slightly different there (uses idle_task[] instead of idle_conn_cleanup[] like in 2.x).

(cherry picked from commit bbb5f1d6d2a9948409683aa5865c130801d193ad)
Signed-off-by: Willy Tarreau <w@1wt.eu>

commit 5cc02aa4d9ae13f1b6833dcb5cd1c30d7f9d524d
Author: Olivier Houchard <ohouchard@haproxy.com>
Date: Thu Oct 17 18:02:53 2019 +0200

BUG/MEDIUM: mux_pt: Make sure we don't have a conn_stream before freeing.

On error, make sure we don't have a conn_stream before freeing the connection and the associated mux context. Otherwise a stream will still reference the connection, and attempt to use it. If we still have a conn_stream, it will properly be free'd when the detach method is called, anyway.

This should be backported to 2.0 and 1.9.

(cherry picked from commit e8826ded5fea3593d89da2be5c2d81c522070995)
Signed-off-by: Willy Tarreau <w@1wt.eu>

commit 297df1860c6d09c7edde1dd6b0bd4f9758600ff3
Author: Christopher Faulet <cfaulet@haproxy.com>
Date: Thu Oct 17 14:40:48 2019 +0200

BUG/MINOR: tcp: Don't alter counters returned by tcp info fetchers

There are 2 kinds of tcp info fetchers. Those returning a time value (fc_rtt and fc_rttval) and those returning a counter (fc_unacked, fc_sacked, fc_retrans, fc_fackets, fc_lost, fc_reordering). Because of a bug, the counters were handled as time values, and by default, were divided by 1000 (because of an invalid conversion from us to ms). To work around this bug and have the right value, the argument "us" had to be specified.

So now, tcp info fetchers returning a counter don't support any argument anymore. To not break old configurations, if an argument is provided, it is ignored and a warning is emitted during the configuration parsing. In addition, parameter validiation is now performed during the configuration parsing.

This patch must be backported as far as 1.7.

(cherry picked from commit ba0c53ef71cd7d2b344de318742d0ef239fd34e4)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 1b7c4dc3fc509a40debbf4ffa6342f56c7046e83
Author: William Lallemand <wlallemand@haproxy.com>
Date: Tue Oct 15 14:04:08 2019 +0200

BUG/MINOR: mworker/ssl: close openssl FDs unconditionally

Patch 56996da ("BUG/MINOR: mworker/ssl: close OpenSSL FDs on reload") fixes a issue where the /dev/random FD was leaked by OpenSSL upon a reload in master worker mode. Indeed the FD was not flagged with CLOEXEC. The fix was checking if ssl_used_frontend or ssl_used_backend were set to close the FD. This is wrong, indeed the lua init code creates an SSL server without increasing the backend value, so the deinit is never done when you don't use SSL in your configuration.

To reproduce the problem you just need to build haproxy with openssl and lua with an openssl which does not use the getrandom() syscall. No openssl nor lua configuration are required for haproxy.

This patch must be backported as far as 1.8. Fix issue #314.

(cherry picked from commit 5fdb5b36e1e0bef9b8a79c3550bd7a8751bac396)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 046a2e27886fd52f969c04582aab4931c34a48c3
Author: Christopher Faulet <cfaulet@haproxy.com>
Date: Wed Oct 16 09:09:04 2019 +0200

BUG/MINOR: http-htx: Properly set htx flags on error files to support keep-alive

When an error file was loaded, the flag HTX_SL_F_XFER_LEN was never set on the HTX start line because of a bug. During the headers parsing, the flag H1_MF_XFER_LEN is never set on the h1m. But it was the condition to set HTX_SL_F_XFER_LEN on the HTX start-line. Instead, we must only rely on the flags H1_MF_CLEN or H1_MF_CHNK.

Because of this bug, it was impossible to keep a connection alive for a response generated by HAProxy. Now the flag HTX_SL_F_XFER_LEN is set when an error file have a content length (chunked responses are unsupported at this stage) and the connection may be kept alive if there is no connection header specified to explicitly close it.

This patch must be backported to 2.0 and 1.9.

(cherry picked from commit 0d4ce93fcf9bd1f350c95f5a1bbe403bce57c680)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 4cb1637b72975c9d9e819b03d01453b860340686
Author: Willy Tarreau <w@1wt.eu>
Date: Wed Oct 16 09:44:55 2019 +0200

MINOR: version: make the version strings variables, not constants

It currently is not possible to figure the exact haproxy version from a core file for the sole reason that the version is stored into a const string and as such ends up in the .text section that is not part of a core file. By turning them into variables we move them to the data section and they appear in core files. In order to help finding them, we just prepend an extra variable in front of them and we're able to immediately spot the version strings from a core file:

$ strings core | fgrep -A2 'HAProxy version'
HAProxy version follows
2.1-dev2-e0f48a-88 2019/10/15

(These are haproxy_version and haproxy_date respectively).

This may be backported to 2.0 since this part is not support to impact anything but the developer's time spent debugging.

(cherry picked from commit abefa34c344b7aa2c38654664c2dd170d50e3b2e)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 5e1c1468789d80213325054b6fc1dbd1c70d7776
Author: Miroslav Zagorac <mzagorac@haproxy.com>
Date: Mon Oct 14 17:15:56 2019 +0200

BUG/MINOR: WURFL: fix send_log() function arguments

If the user agent data contains text that has special characters that are used to format the output from the vfprintf() function, haproxy crashes. String "%s %s %s" may be used as an example.

% curl -A "%s %s %s" localhost:10080/index.html
curl: (52) Empty reply from server

haproxy log:

00000000:WURFL-test.clireq[00c7:ffffffff]: GET /index.html HTTP/1.1
00000000:WURFL-test.clihdr[00c7:ffffffff]: host: localhost:10080
00000000:WURFL-test.clihdr[00c7:ffffffff]: user-agent: %s %s %s
00000000:WURFL-test.clihdr[00c7:ffffffff]: accept: */*
segmentation fault (core dumped)

gdb 'where' output:

#0 strlen () at ../sysdeps/x86_64/strlen.S:106
#1 0x00007f7c014a8da8 in _IO_vfprintf_internal (s=s@entry=0x7ffc808fe750, format=<optimized out>, format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n", ap=ap@entry=0x7ffc808fe8b8) at vfprintf.c:1637
#2 0x00007f7c014cfe89 in _IO_vsnprintf (string=0x55cb772c34e0 "WURFL: retrieve header request returns [(null) %s %s %s B,w\313U", maxlen=<optimized out>, format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n", args=args@entry=0x7ffc808fe8b8) at vsnprintf.c:114
#3 0x000055cb758f898f in send_log (p=p@entry=0x0, level=level@entry=5, format=format@entry=0x7ffc808fe9c0 "WURFL: retrieve header request returns [%s %s %s]\n") at src/log.c:1477
#4 0x000055cb75845e0b in ha_wurfl_log (message=message@entry=0x55cb75989460 "WURFL: retrieve header request returns [%s]\n") at src/wurfl.c:47
#5 0x000055cb7584614a in ha_wurfl_retrieve_header (header_name=<optimized out>, wh=0x7ffc808fec70) at src/wurfl.c:763

In case WURFL (actually HAProxy) is not compiled with debug option enabled (-DWURFL_DEBUG), this bug does not come to light.

This patch could be backported in every version supporting the ScientiaMobile's WURFL. (as far as 1.7)

(cherry picked from commit f0eb3739ac5460016455cd606d856e7bd2b142fb)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit b4bad50d04cd3156d6ba11a7bcf60def96b70b2c
Author: Christopher Faulet <cfaulet@haproxy.com>
Date: Fri Oct 11 14:22:00 2019 +0200

BUG/MINOR: mux-h1: Capture ignored parsing errors

When the option "accept-invalid-http-request" is enabled, some parsing errors are ignored. But the position of the error is reported. In legacy HTTP mode, such errors were captured. So, we now do the same in the H1 multiplexer.

If required, this patch may be backported to 2.0 and 1.9.

(cherry picked from commit 486498c630a0678446808107d02f94c48fc6722a)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 85fc6ef50dd0d2404c0c43a5671c949371f36ee9
Author: Christopher Faulet <cfaulet@haproxy.com>
Date: Mon Oct 14 14:17:00 2019 +0200

BUG/MINOR: mux-h1: Mark the output buffer as full when the xfer is interrupted

When an outgoing HTX message is formatted to a raw message, if we fail to copy data of an HTX block into the output buffer, we mark it as full. Before it was only done calling the function buf_room_for_htx_data(). But this function is designed to optimize input processing.

This patch must be backported to 2.0 and 1.9.

(cherry picked from commit a61aa544b4b95d1416fe5684ca2d3a0e110e9743)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit da0889df9f33c2a8585e95c95db0f81a80dcc40c
Author: Christopher Faulet <cfaulet@haproxy.com>
Date: Mon Oct 14 11:29:48 2019 +0200

BUG/MINOR: chunk: Fix tests on the chunk size in functions copying data

When raw data are copied or appended in a chunk, the result must not exceed the chunk size but it can reach it. Unlike functions to copy or append a string, there is no terminating null byte.

This patch must be backported as far as 1.8. Note in 1.8, the functions chunk_cpy() and chunk_cat() don't exist.

(cherry picked from commit 48fa033f2809af265c230a7c7cf86413b7f9909b)
Signed-off-by: Christopher Faulet <cfaulet@haproxy.com>

commit 3cc7647b31a7ec4373ac5023fff93da41ece117b
Author: Christopher Faulet <cfaulet@haproxy.com>
Date: Mon Oct 14 14:36:51 2019 +0200

BUG/MEDIUM: htx: Catch chunk_memcat() failures when HTX data are formatted to h1

In functions htx_*_to_h1(), most of time several calls to chunk_memcat() are cha…
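Editorial note on the 'tune.ssl.default-dh-param' entry above: that directive lives in the global section of the HAProxy configuration. A minimal illustrative sketch (the 2048-bit value is chosen for the example, not taken from the commit):

global
    # illustrative only: with the fix above, this default is honoured
    # again for DHE ciphers when built against OpenSSL >= 1.1.1
    tune.ssl.default-dh-param 2048

As the commit message explains, if the library's security level (e.g. the SECLEVEL=2 setting quoted from openssl.cnf) rejects the chosen size, HAProxy now warns and, with OpenSSL > 1.1.0, lets OpenSSL pick a DH parameter automatically.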
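A similar editorial note on the tcp info fetchers entry above: after that change, time-valued fetchers keep their optional unit argument while counter fetchers take none. A hedged configuration sketch (the frontend and header names are invented for illustration):

frontend fe_example
    bind :8080
    # time value: a unit argument such as (us) or (ms) is still accepted
    http-request set-header X-FC-RTT %[fc_rtt(us)]
    # counter: takes no argument after this fix; a leftover argument is
    # ignored with a warning at configuration parsing time
    http-request set-header X-FC-Retrans %[fc_retrans]
    default_backend be_example

backend be_example
    server s1 127.0.0.1:8000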