memcache: pull Flush() up to general RateLimitCache interface
memcache: Add documentation to README

Signed-off-by: David Weitzman <dweitzman@pinterest.com>
dweitzman committed Sep 18, 2020
1 parent 0a76b0b commit f8f7de4
Showing 5 changed files with 34 additions and 15 deletions.
13 changes: 13 additions & 0 deletions README.md
@@ -28,6 +28,7 @@
- [Pipelining](#pipelining)
- [One Redis Instance](#one-redis-instance)
- [Two Redis Instances](#two-redis-instances)
- [Memcache](#memcache)
- [Contact](#contact)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->
@@ -484,6 +485,18 @@ To configure two Redis instances use the following environment variables:
This setup will use the Redis server configured with the `_PERSECOND_` vars for
per second limits, and the other Redis server for all other limits.

# Memcache

Experimental Memcache support has been added as an alternative to Redis in v1.5.

To configure a Memcache instance, use the following environment variables instead of the Redis variables:

1. `MEMCACHE_HOST_PORT=<host:port>`
1. `BACKEND_TYPE=memcache`

In memcache mode, increments happen asynchronously, so it's technically possible for a client
to briefly exceed quota when multiple requests arrive at exactly the same time.
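
To illustrate the race, here is a minimal, standalone Go sketch (assumed names, not the project's actual code): the limit decision is made against the last-read counter while the increment lands in a background goroutine, which is also what the cache's `Flush()` waits on.

```go
package main

import (
	"fmt"
	"sync"
)

// sketchCache mimics the memcache-mode behavior: reads are synchronous,
// increments run in a background goroutine tracked by a WaitGroup.
type sketchCache struct {
	mu     sync.Mutex
	counts map[string]int
	wg     sync.WaitGroup
}

// DoLimit decides against the current value, then increments asynchronously,
// so two simultaneous requests can both observe the pre-increment count.
func (c *sketchCache) DoLimit(key string, limit int) bool {
	c.mu.Lock()
	current := c.counts[key]
	c.mu.Unlock()

	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		c.mu.Lock()
		c.counts[key]++
		c.mu.Unlock()
	}()

	return current < limit // allowed if under quota at read time
}

// Flush waits for all pending increments, mirroring the new interface method.
func (c *sketchCache) Flush() { c.wg.Wait() }

func main() {
	cache := &sketchCache{counts: map[string]int{}}
	// Two "simultaneous" requests against a limit of 1: both may be allowed,
	// because neither necessarily sees the other's in-flight increment.
	a := cache.DoLimit("client", 1)
	b := cache.DoLimit("client", 1)
	cache.Flush()
	fmt.Println(a, b, cache.counts["client"]) // typically: true true 2
}
```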

# Contact

* [envoy-announce](https://groups.google.com/forum/#!forum/envoy-announce): Low frequency mailing
4 changes: 4 additions & 0 deletions src/limiter/cache.go
@@ -35,4 +35,8 @@ type RateLimitCache interface {
ctx context.Context,
request *pb.RateLimitRequest,
limits []*config.RateLimit) []*pb.RateLimitResponse_DescriptorStatus

// Waits for any unfinished asynchronous work. This may be used by unit tests,
// since the memcache implementation does increments in a background goroutine.
Flush()
}
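
As a usage sketch (a hypothetical test helper, not part of this commit; the `pb` import path is an assumption and should match whatever `src/limiter/cache.go` uses), tests can now settle asynchronous work through the shared interface before asserting:

```go
package limiter_test

import (
	"context"
	"testing"

	pb "github.com/envoyproxy/go-control-plane/envoy/service/ratelimit/v2" // assumed; match src/limiter/cache.go
	"github.com/envoyproxy/ratelimit/src/config"
	"github.com/envoyproxy/ratelimit/src/limiter"
)

// doLimitAndSettle runs one DoLimit call and then flushes, so any background
// increments (memcache mode) have landed before the caller asserts on state.
func doLimitAndSettle(t *testing.T, cache limiter.RateLimitCache,
	req *pb.RateLimitRequest, limits []*config.RateLimit) []*pb.RateLimitResponse_DescriptorStatus {
	t.Helper()
	statuses := cache.DoLimit(context.Background(), req, limits)
	cache.Flush() // no-op for redis; waits on increment goroutines for memcache
	return statuses
}
```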
17 changes: 2 additions & 15 deletions src/memcached/cache_impl.go
@@ -34,19 +34,6 @@ import (
"github.com/envoyproxy/ratelimit/src/settings"
)

type rateLimitCache interface {
// Same as in limiter.RateLimitCache
DoLimit(
ctx context.Context,
request *pb.RateLimitRequest,
limits []*config.RateLimit) []*pb.RateLimitResponse_DescriptorStatus

// Waits for any lingering goroutines that are incrementing memcache values.
// This is used for unit tests, since the memcache increments happen
// asynchronously in the background.
Flush()
}

type rateLimitMemcacheImpl struct {
client Client
timeSource limiter.TimeSource
@@ -277,7 +264,7 @@ func (this *rateLimitMemcacheImpl) Flush() {
this.wg.Wait()
}

-func NewRateLimitCacheImpl(client Client, timeSource limiter.TimeSource, jitterRand *rand.Rand, expirationJitterMaxSeconds int64, localCache *freecache.Cache, scope stats.Scope) rateLimitCache {
+func NewRateLimitCacheImpl(client Client, timeSource limiter.TimeSource, jitterRand *rand.Rand, expirationJitterMaxSeconds int64, localCache *freecache.Cache, scope stats.Scope) limiter.RateLimitCache {
return &rateLimitMemcacheImpl{
client: client,
timeSource: timeSource,
@@ -288,7 +275,7 @@ func NewRateLimitCacheImpl(client Client, timeSource limiter.TimeSource, jitterR
}
}

-func NewRateLimitCacheImplFromSettings(s settings.Settings, timeSource limiter.TimeSource, jitterRand *rand.Rand, localCache *freecache.Cache, scope stats.Scope) rateLimitCache {
+func NewRateLimitCacheImplFromSettings(s settings.Settings, timeSource limiter.TimeSource, jitterRand *rand.Rand, localCache *freecache.Cache, scope stats.Scope) limiter.RateLimitCache {
return NewRateLimitCacheImpl(
memcache.New(s.MemcacheHostPort),
timeSource,
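
With both constructors now returning `limiter.RateLimitCache`, one optional way to keep that contract checked at build time (a suggestion, not something in this commit) is a compile-time assertion in `src/memcached/cache_impl.go`, where `limiter` is already imported:

```go
// Fails to compile if *rateLimitMemcacheImpl ever stops satisfying the shared
// limiter.RateLimitCache interface (for example, if Flush were removed again).
var _ limiter.RateLimitCache = (*rateLimitMemcacheImpl)(nil)
```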
3 changes: 3 additions & 0 deletions src/redis/cache_impl.go
@@ -210,6 +210,9 @@ func (this *rateLimitCacheImpl) DoLimit(
return responseDescriptorStatuses
}

// Flush() is a no-op with redis since quota reads and updates happen synchronously.
func (this *rateLimitCacheImpl) Flush() {}

func NewRateLimitCacheImpl(client Client, perSecondClient Client, timeSource limiter.TimeSource, jitterRand *rand.Rand, expirationJitterMaxSeconds int64, localCache *freecache.Cache) limiter.RateLimitCache {
return &rateLimitCacheImpl{
client: client,
12 changes: 12 additions & 0 deletions test/mocks/limiter/limiter.go

Some generated files are not rendered by default.
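
The regenerated mock isn't rendered here, but adding `Flush()` to the interface implies the gomock output gained stubs roughly like the following (a sketch of typical gomock-generated code; the exact type names and the `gomock`/`reflect` imports are assumptions):

```go
// Flush mocks base method.
func (m *MockRateLimitCache) Flush() {
	m.ctrl.Call(m, "Flush")
}

// Flush indicates an expected call of Flush.
func (mr *MockRateLimitCacheMockRecorder) Flush() *gomock.Call {
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Flush",
		reflect.TypeOf((*MockRateLimitCache)(nil).Flush))
}
```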
