When an upstream sends a 100-continue header, it is handled by the HCM: `has_continue_header_` is set on the shared stream state, the 100-continue is encoded to the downstream, and `continue_headers_continued_` is set on the encoder stream filter state. When a previously paused decode filter chain is resumed, common logic (shared by the encode/decode active stream filters) attempts to issue `do100ContinueHeaders()` on the decode path.
I have a reproducer that can trip this with the grpc-web filter; you need some filter that allows headers to be sent to the backend, stops iteration in `decodeData()`, and then resumes at some later point. I imagine this impacts a bunch of third-party filters, Lua/Wasm filters, etc. as well. I initially failed to reproduce this with the buffer filter, since it buffers both headers and data.
This is not considered a security vulnerability, since the crash requires either an untrusted upstream (outside of the threat model) or a very specific combination of filter chain configuration, HCM config and timing. This has been confirmed by the OSS Envoy security team as being fixable in this repo.
The 100-continue state tracking variables were checked in
commonContinue() (on both the decode and encode paths), conditioning
do100ContinueHeaders(). This makes no sense on the decode path and can
lead to crashes as per envoyproxy#10923 when the decode pipeline is resumed, so
the logic was refactored out to just the encode path.
Risk level: Low
Testing: Unit and integration regression tests added, as well as corpus
entry.
Fixes oss-fuzz issue
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=18461
Fixes envoyproxy#10923
Signed-off-by: Harvey Tuch <htuch@google.com>
This was discovered by the fuzz report https://oss-fuzz.com/testcase-detail/5674283828772864, where the `do100ContinueHeaders()` method in `ActiveStreamDecoderFilter` has its `NOT_REACHED` panic tripped.