Reduce short living tasks produced by memcached implementation and move them to goroutines pool #251
Conversation
Makes sense, thanks!
Oops, looks like a real CI issue, please take a look. /wait
@mattklein123 The question is: can we consider bumping the target version to the recent Go 1.16? If not, I'll change the implementation to work on Go 1.14.
I would be fine with bumping the Go version, please go for it! /wait
@mattklein123 I bumped the Go version to 1.16, please take a look.
I think there might be another fix required due to the Go update. Sorry. :( /wait
…yproxy#246) The underlying memcache client library allows the maximum number of idle connections to be configured, and currently defaults to a value of 2; see:
https://github.com/bradfitz/gomemcache/blob/master/memcache/memcache.go#L72
https://github.com/bradfitz/gomemcache/blob/master/memcache/memcache.go#L145
https://github.com/bradfitz/gomemcache/blob/master/memcache/memcache.go#L239
This change makes the value configurable via a new environment variable, MEMCACHE_MAX_IDLE_CONNS, which defaults to -1, meaning the default from the library will apply (which is the current behaviour).
Signed-off-by: Peter Marsh <pete.d.marsh@gmail.com>
Signed-off-by: bstorozhuk <storozhuk.b.m@gmail.com>
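For illustration, a minimal sketch of how such a variable could be wired through to the client (the function name and wiring here are assumptions, not the repo's exact code; `Client.MaxIdleConns` is the gomemcache field being configured):

```go
// Hypothetical sketch, not the repo's exact code: forward
// MEMCACHE_MAX_IDLE_CONNS to the gomemcache client. Any value <= 0
// (including the -1 default) leaves the library default of 2 in place.
package memcached

import (
	"os"
	"strconv"

	"github.com/bradfitz/gomemcache/memcache"
)

func newClient(hostPort string) *memcache.Client {
	client := memcache.New(hostPort)
	if v := os.Getenv("MEMCACHE_MAX_IDLE_CONNS"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			client.MaxIdleConns = n // override the library default
		}
	}
	return client
}
```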
…oyproxy#252)
- server listen addresses are configurable via environment variables
- matches the port configurability by providing *HOST environment variables

Fixes envoyproxy#245
Signed-off-by: Sunjay Bhatia <sunjayb@vmware.com>
Signed-off-by: bstorozhuk <storozhuk.b.m@gmail.com>
Use a deferred barrier.signal() so the panic definitely occurs before we continue on in the test. Config reload uses recover() and increments the config load counter; the tests were failing to see the config load error counter increment.
Fixes: envoyproxy#256
Signed-off-by: Sunjay Bhatia <sunjayb@vmware.com>
Signed-off-by: bstorozhuk <storozhuk.b.m@gmail.com>
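As an illustration of the ordering this relies on (all identifiers below are placeholders, not the repo's actual ones): deferred calls run in LIFO order, so a recover handler registered after the barrier signal runs first, and the error counter is bumped before the test is released.

```go
// Illustrative only; all names are placeholders. Deferred calls run LIFO,
// so the recover handler (registered last) runs before the barrier signal,
// guaranteeing the error counter is incremented before the test resumes.
package main

import "fmt"

func main() {
	done := make(chan struct{})
	errors := 0

	go func() {
		defer close(done) // runs second: releases the waiting test
		defer func() {    // runs first: recover and count the failure
			if e := recover(); e != nil {
				errors++
			}
		}()
		panic("simulated config load failure")
	}()

	<-done // the test "continues on" only after both defers have run
	fmt.Println("config load errors:", errors) // prints: config load errors: 1
}
```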
This allows MEMCACHE_SRV to be specified as an SRV record from which multiple memcache hosts can be resolved, for example:

MEMCACHE_SRV=_memcache._tcp.mylovelydomain.com

This can be used instead of MEMCACHE_HOST_PORT. The record will be resolved, and whatever set of servers it represents will be used as the set of memcache servers to connect to. At this stage neither priority nor weight is supported, though weight could be supported fairly straightforwardly in future. The SRV record can be polled periodically for new servers by setting the following env var (with 0 meaning "never check"):

MEMCACHE_SRV_REFRESH=600s # supports standard go durations

Signed-off-by: Peter Marsh <pete.d.marsh@gmail.com>
Signed-off-by: bstorozhuk <storozhuk.b.m@gmail.com>
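A hedged sketch of how SRV resolution like this can be done in Go (names and structure are assumptions, not the PR's actual code; `net.LookupSRV` with empty service and proto arguments queries the name as given):

```go
// Hypothetical sketch, not the PR's exact code: resolve MEMCACHE_SRV into
// a host:port list and optionally re-resolve it on a fixed interval.
package srv

import (
	"net"
	"strconv"
	"strings"
	"time"
)

// resolveSRV looks up the record directly (empty service/proto means
// net.LookupSRV queries the name as given, e.g. _memcache._tcp.example.com).
func resolveSRV(name string) ([]string, error) {
	_, addrs, err := net.LookupSRV("", "", name)
	if err != nil {
		return nil, err
	}
	servers := make([]string, 0, len(addrs))
	for _, a := range addrs {
		host := strings.TrimSuffix(a.Target, ".") // drop the trailing root dot
		servers = append(servers, net.JoinHostPort(host, strconv.Itoa(int(a.Port))))
	}
	return servers, nil // priority and weight are ignored, as in the commit
}

// pollSRV re-resolves periodically; refresh == 0 means "never check".
func pollSRV(name string, refresh time.Duration, onChange func([]string)) {
	if refresh == 0 {
		return
	}
	for range time.Tick(refresh) {
		if servers, err := resolveSRV(name); err == nil {
			onChange(servers)
		}
	}
}
```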
@mattklein123 So x509 (c *Certificate) VerifyHostname(h string) now fails on legacy certificates, and even if we fix the certificate in our tests, this will continue failing on some certificates of other users. This behavior can be temporarily disabled with the GODEBUG=x509ignoreCN=0 env variable, but all users of ratelimit would need to pass this variable while they are updating their certificates. I think such a change is definitely beyond the scope of this ticket, so I moved back to Go 1.14 and changed the goroutine pool to work on Go 1.14.
Thanks!
During the investigation of #250 I took a quick look at pprof; the results didn't show any noticeable hot spots in github.com/envoyproxy/ratelimit or github.com/bradfitz/gomemcache,
but a lot of samples were in runtime.schedule, runtime.findrunnable, and runtime.futex. So I decided to reduce the potential load on the scheduler by moving the continuous spawning of new short-lived goroutines to a pool that can automatically expand to the necessary size, stabilize, and stop creating new goroutines if the load remains stable.
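For illustration, a minimal sketch of such a pool (hypothetical names, not the PR's actual implementation): tasks are submitted to a channel, and workers are spawned lazily up to a cap and then reused.

```go
// Minimal sketch, not the PR's actual implementation: a pool that expands
// lazily up to a cap and then reuses its workers, so a stable load stops
// producing new goroutines.
package pool

import "sync/atomic"

type pool struct {
	tasks   chan func()
	workers int64 // current number of workers
	max     int64 // expansion cap
}

func newPool(max, queueSize int) *pool {
	p := &pool{tasks: make(chan func(), queueSize), workers: 1, max: int64(max)}
	go p.worker() // start with one worker so buffered tasks get drained
	return p
}

// submit enqueues a task, growing the pool only when the queue is full
// and the worker cap has not been reached yet.
func (p *pool) submit(task func()) {
	select {
	case p.tasks <- task: // an existing worker will pick it up
	default:
		if atomic.AddInt64(&p.workers, 1) <= p.max {
			go p.worker() // expand: spawn one more reusable worker
		} else {
			atomic.AddInt64(&p.workers, -1) // at the cap, do not expand
		}
		p.tasks <- task // block until a worker frees up
	}
}

func (p *pool) worker() {
	for task := range p.tasks {
		task()
	}
}
```

The key difference from doing `go task()` per request is that workers persist and are reused, which removes most of the scheduler churn that showed up in the profiles.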
This simple change reduced overall CPU consumption on a 32-core machine by 20% in the same setup described in issue #250.