Explicit unlimited ratelimits #261

Merged
20 changes: 20 additions & 0 deletions README.md
@@ -18,6 +18,7 @@
- [Example 2](#example-2)
- [Example 3](#example-3)
- [Example 4](#example-4)
- [Example 5](#example-5)
- [Loading Configuration](#loading-configuration)
- [Log Format](#log-format)
- [Request Fields](#request-fields)
@@ -329,6 +330,25 @@ descriptors:
unit: second
```

#### Example 5

We can also define unlimited rate limit descriptors:

```yaml
domain: internal
descriptors:
- key: ldap
rate_limit:
unlimited: true

- key: azure
rate_limit:
unit: minute
requests_per_unit: 100
```

For an unlimited descriptor, the request is not sent to the underlying cache (Redis/Memcached); it is answered immediately and locally by the ratelimit instance.
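
The short-circuit this paragraph describes is what the cache changes later in this PR implement. Below is a minimal, self-contained Go sketch of the decision; the types and function names are illustrative stand-ins, not the project's actual API:

```go
package main

import "fmt"

// Illustrative stand-in for the config entry touched by this PR.
type RateLimit struct {
	Unlimited bool
}

// shouldQueryCache reports whether a matched limit still needs a
// Redis/Memcached round trip. Unlimited entries are answered locally.
func shouldQueryCache(limit *RateLimit) bool {
	return limit != nil && !limit.Unlimited
}

func main() {
	ldap := &RateLimit{Unlimited: true}
	azure := &RateLimit{Unlimited: false}
	fmt.Println(shouldQueryCache(ldap))  // false: answered locally, no cache lookup
	fmt.Println(shouldQueryCache(azure)) // true: normal counted path through the cache
}
```

The real implementations simply `continue` past the key before building the cache request, as the Redis and Memcached diffs further down show.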
Member: Can you mention why someone would want to do this? I assume for stats, etc.?

Contributor (author): Well, stats yes. But also, in our case, our client will not allow a request to go through if a descriptor is not found, so we do have to have it defined.

Contributor (author): I'll add something to the README.

Contributor (author): Added


## Loading Configuration

The Ratelimit service uses a library written by Lyft called [goruntime](https://github.com/lyft/goruntime) to do configuration loading. Goruntime monitors
6 changes: 6 additions & 0 deletions examples/ratelimit/config/example.yaml
@@ -27,3 +27,9 @@ descriptors:
rate_limit:
unit: second
requests_per_unit: 1
- key: bay
rate_limit:
unlimited: true
- key: qux
rate_limit:
unlimited: true
7 changes: 4 additions & 3 deletions src/config/config.go
@@ -16,9 +16,10 @@ func (e RateLimitConfigError) Error() string {

// Wrapper for an individual rate limit config entry which includes the defined limit and stats.
type RateLimit struct {
FullKey string
Stats stats.RateLimitStats
Limit *pb.RateLimitResponse_RateLimit
FullKey string
Stats stats.RateLimitStats
Limit *pb.RateLimitResponse_RateLimit
Unlimited bool
}

// Interface for interacting with a loaded rate limit config.
22 changes: 15 additions & 7 deletions src/config/config_impl.go
@@ -15,6 +15,7 @@ import (
type yamlRateLimit struct {
RequestsPerUnit uint32 `yaml:"requests_per_unit"`
Unit string
Unlimited bool `yaml:"unlimited"`
}

type yamlDescriptor struct {
@@ -51,17 +52,19 @@ var validKeys = map[string]bool{
"rate_limit": true,
"unit": true,
"requests_per_unit": true,
"unlimited": true,
}

// Create a new rate limit config entry.
// @param requestsPerUnit supplies the requests per unit of time for the entry.
// @param unit supplies the unit of time for the entry.
// @param rlStats supplies the stats structure associated with the RateLimit
// @param unlimited supplies whether the rate limit is unlimited
// @return the new config entry.
func NewRateLimit(
requestsPerUnit uint32, unit pb.RateLimitResponse_RateLimit_Unit, rlStats stats.RateLimitStats) *RateLimit {
requestsPerUnit uint32, unit pb.RateLimitResponse_RateLimit_Unit, rlStats stats.RateLimitStats, unlimited bool) *RateLimit {

return &RateLimit{FullKey: rlStats.GetKey(), Stats: rlStats, Limit: &pb.RateLimitResponse_RateLimit{RequestsPerUnit: requestsPerUnit, Unit: unit}}
return &RateLimit{FullKey: rlStats.GetKey(), Stats: rlStats, Limit: &pb.RateLimitResponse_RateLimit{RequestsPerUnit: requestsPerUnit, Unit: unit}, Unlimited: unlimited}
}

// Dump an individual descriptor for debugging purposes.
@@ -112,19 +115,21 @@ func (this *rateLimitDescriptor) loadDescriptors(config RateLimitConfigToLoad, p
var rateLimit *RateLimit = nil
var rateLimitDebugString string = ""
if descriptorConfig.RateLimit != nil {
unlimited := descriptorConfig.RateLimit.Unlimited

value, present :=
pb.RateLimitResponse_RateLimit_Unit_value[strings.ToUpper(descriptorConfig.RateLimit.Unit)]
if !present || value == int32(pb.RateLimitResponse_RateLimit_UNKNOWN) {
if (!present || value == int32(pb.RateLimitResponse_RateLimit_UNKNOWN)) && !unlimited {
Member: Can you split this logic out so that we actually check whether unlimited is set together with a unit, and add an error message (and a test) for that? I think that would be clearer to the user.

Member: (Ping on this)

Contributor (author): I wasn't too sure what exactly you meant, so please review this again :)
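
For reference, one possible shape of the explicit check being discussed here, written against the variables already in scope in this hunk (`unlimited`, `descriptorConfig`, `config`). This is a sketch of the reviewer's suggestion only, not necessarily the code that was merged:

```go
// Reject configs that combine "unlimited: true" with a unit or
// requests_per_unit, so the conflict is reported explicitly.
// (Illustrative only; the merged wording and placement may differ.)
if unlimited && (descriptorConfig.RateLimit.Unit != "" || descriptorConfig.RateLimit.RequestsPerUnit != 0) {
	panic(newRateLimitConfigError(
		config,
		"should not specify rate limit unit or requests_per_unit when unlimited is set"))
}
```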

panic(newRateLimitConfigError(
config,
fmt.Sprintf("invalid rate limit unit '%s'", descriptorConfig.RateLimit.Unit)))
}

rateLimit = NewRateLimit(
descriptorConfig.RateLimit.RequestsPerUnit, pb.RateLimitResponse_RateLimit_Unit(value), statsManager.NewStats(newParentKey))
descriptorConfig.RateLimit.RequestsPerUnit, pb.RateLimitResponse_RateLimit_Unit(value), statsManager.NewStats(newParentKey), unlimited)
rateLimitDebugString = fmt.Sprintf(
" ratelimit={requests_per_unit=%d, unit=%s}", rateLimit.Limit.RequestsPerUnit,
rateLimit.Limit.Unit.String())
" ratelimit={requests_per_unit=%d, unit=%s, unlimited=%t}", rateLimit.Limit.RequestsPerUnit,
rateLimit.Limit.Unit.String(), rateLimit.Unlimited)
}

logger.Debugf(
@@ -167,6 +172,8 @@ func validateYamlKeys(config RateLimitConfigToLoad, config_map map[interface{}]i
case string:
// int is a leaf type in ratelimit config. No need to keep validating.
case int:
// bool is a leaf type in ratelimit config. No need to keep validating.
case bool:
// nil case is an incorrectly formed yaml. However, because this function's purpose is to validate
// the yaml's keys we don't panic here.
case nil:
@@ -240,7 +247,8 @@ func (this *rateLimitConfigImpl) GetLimit(
rateLimit = NewRateLimit(
descriptor.GetLimit().GetRequestsPerUnit(),
rateLimitOverrideUnit,
this.statsManager.NewStats(rateLimitKey))
this.statsManager.NewStats(rateLimitKey),
false)
return rateLimit
}
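
With the extra parameter on `NewRateLimit`, callers now state explicitly whether an entry is unlimited. A hedged usage sketch follows; the stats keys and values are made up for illustration, and the minute constant is assumed to follow the same generated naming as `pb.RateLimitResponse_RateLimit_UNKNOWN`:

```go
// An unlimited entry: no unit or requests_per_unit is needed, only the
// trailing flag. The unit stays UNKNOWN and the count stays zero.
unlimitedEntry := NewRateLimit(
	0,
	pb.RateLimitResponse_RateLimit_UNKNOWN,
	statsManager.NewStats("internal.ldap"), // illustrative stats key
	true)

// A conventional entry passes false for the new parameter.
boundedEntry := NewRateLimit(
	100,
	pb.RateLimitResponse_RateLimit_MINUTE, // assumed enum name for "minute"
	statsManager.NewStats("internal.azure"),
	false)
```

This mirrors the two descriptors in Example 5 of the README above: `ldap` is unlimited, while `azure` keeps a conventional 100-per-minute limit.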

7 changes: 6 additions & 1 deletion src/limiter/base_limiter.go
@@ -70,7 +70,7 @@ func (this *BaseRateLimiter) IsOverLimitWithLocalCache(key string) bool {
// Generates response descriptor status based on cache key, over the limit with local cache, over the limit and
// near the limit thresholds. Thresholds are checked in order and are mutually exclusive.
func (this *BaseRateLimiter) GetResponseDescriptorStatus(key string, limitInfo *LimitInfo,
isOverLimitWithLocalCache bool, hitsAddend uint32) *pb.RateLimitResponse_DescriptorStatus {
isOverLimitWithLocalCache bool, hitsAddend uint32, isUnlimited bool) *pb.RateLimitResponse_DescriptorStatus {
if key == "" {
return this.generateResponseDescriptorStatus(pb.RateLimitResponse_OK,
nil, 0)
@@ -81,6 +81,11 @@ func (this *BaseRateLimiter) GetResponseDescriptorStatus(key string, limitInfo *
return this.generateResponseDescriptorStatus(pb.RateLimitResponse_OVER_LIMIT,
limitInfo.limit.Limit, 0)
}
if isUnlimited {
limitInfo.limit.Stats.WithinLimit.Add(uint64(hitsAddend))
return this.generateResponseDescriptorStatus(pb.RateLimitResponse_OK,
limitInfo.limit.Limit, limitInfo.limit.Limit.RequestsPerUnit)
}
var responseDescriptorStatus *pb.RateLimitResponse_DescriptorStatus
limitInfo.overLimitThreshold = limitInfo.limit.Limit.RequestsPerUnit
// The nearLimitThreshold is the number of requests that can be made before hitting the nearLimitRatio.
10 changes: 8 additions & 2 deletions src/memcached/cache_impl.go
@@ -83,6 +83,12 @@ func (this *rateLimitMemcacheImpl) DoLimit(
continue
}

// Check if the key is unlimited
if limits[i].Unlimited {
logger.Debugf("cache key is within the limit: %s", cacheKey.Key)
continue
}

logger.Debugf("looking up cache key: %s", cacheKey.Key)
keysToGet = append(keysToGet, cacheKey.Key)
}
@@ -120,7 +126,7 @@ func (this *rateLimitMemcacheImpl) DoLimit(
limitInfo := limiter.NewRateLimitInfo(limits[i], limitBeforeIncrease, limitAfterIncrease, 0, 0)

responseDescriptorStatuses[i] = this.baseRateLimiter.GetResponseDescriptorStatus(cacheKey.Key,
limitInfo, isOverLimitWithLocalCache[i], hitsAddend)
limitInfo, isOverLimitWithLocalCache[i], hitsAddend, limits[i] != nil && limits[i].Unlimited)
}

this.waitGroup.Add(1)
@@ -136,7 +142,7 @@ func (this *rateLimitMemcacheImpl) increaseAsync(cacheKeys []limiter.CacheKey, i
limits []*config.RateLimit, hitsAddend uint64) {
defer this.waitGroup.Done()
for i, cacheKey := range cacheKeys {
if cacheKey.Key == "" || isOverLimitWithLocalCache[i] {
if cacheKey.Key == "" || isOverLimitWithLocalCache[i] || limits[i].Unlimited {
continue
}

8 changes: 7 additions & 1 deletion src/redis/fixed_cache_impl.go
@@ -58,6 +58,12 @@ func (this *fixedRateLimitCacheImpl) DoLimit(
continue
}

// Check if the key is unlimited
if limits[i].Unlimited {
logger.Debugf("cache key is within the limit: %s", cacheKey.Key)
continue
}

logger.Debugf("looking up cache key: %s", cacheKey.Key)

expirationSeconds := utils.UnitToDivider(limits[i].Limit.Unit)
@@ -97,7 +103,7 @@ func (this *fixedRateLimitCacheImpl) DoLimit(
limitInfo := limiter.NewRateLimitInfo(limits[i], limitBeforeIncrease, limitAfterIncrease, 0, 0)

responseDescriptorStatuses[i] = this.baseRateLimiter.GetResponseDescriptorStatus(cacheKey.Key,
limitInfo, isOverLimitWithLocalCache[i], hitsAddend)
limitInfo, isOverLimitWithLocalCache[i], hitsAddend, limits[i] != nil && limits[i].Unlimited)

}

4 changes: 4 additions & 0 deletions test/config/basic_config.yaml
@@ -56,3 +56,7 @@ descriptors:
rate_limit:
unit: day
requests_per_unit: 25

- key: key6
rate_limit:
unlimited: true
11 changes: 11 additions & 0 deletions test/config/config_test.go
@@ -164,6 +164,17 @@ func TestBasicConfig(t *testing.T) {
assert.EqualValues(1, stats.NewCounter("test-domain.key4.over_limit").Value())
assert.EqualValues(1, stats.NewCounter("test-domain.key4.near_limit").Value())
assert.EqualValues(1, stats.NewCounter("test-domain.key4.within_limit").Value())

rl = rlConfig.GetLimit(
nil, "test-domain",
&pb_struct.RateLimitDescriptor{
Entries: []*pb_struct.RateLimitDescriptor_Entry{{Key: "key6", Value: "foo"}},
})
rl.Stats.TotalHits.Inc()
rl.Stats.WithinLimit.Inc()
assert.True(rl.Unlimited)
assert.EqualValues(1, stats.NewCounter("test-domain.key6.total_hits").Value())
assert.EqualValues(1, stats.NewCounter("test-domain.key6.within_limit").Value())
}

func TestConfigLimitOverride(t *testing.T) {
Loading