
Rate Limiting policy #760

Closed
4 tasks done
mikz opened this issue Jun 12, 2018 · 14 comments
mikz commented Jun 12, 2018

Rate limiting policy was discussed and implemented in: #648, #703, #713, #839

This meta issue is meant to gather all requirements for making rate limiting policy really useful in the next release.

  • Liquid templating the keys
  • Use JWT object for rate limiting
  • Apply rate limit to just a subset of requests
  • Allow dynamic reloading of configuration

Liquid templating the keys

Defined by #713 and implemented by #719

Use JWT object for rate limiting

Defined by #713, implemented by #718

Allow dynamic reloading of configuration

Discussed in #667, being implemented in #703 and #758.

Apply rate limit to just a subset of requests

The use case is applying different rate limits to POST and GET requests.

The first option is using #744 to apply different rate limiting policies to different endpoints.

The alternative is to allow setting the policy limit with Liquid too. That would allow definitions like:

{% if request.method == 'GET' %}50{% else %}10{% endif %}

However, this approach can be very error prone and not very intuitive.

Another alternative would be to define "values" for the rate limiting keys. For example, the rate limit key {{ request.method }} could have several values: POST: 50, GET: 10.
This can be error prone too, since it is easy to miss some values.
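A minimal Python sketch of the "values per key" alternative (the mapping and names here are illustrative, not APIcast configuration):

```python
# Hypothetical "values per key" mapping: each value the rate limit key can
# take maps to its own limit.
limits_by_value = {"POST": 50, "GET": 10}

def limit_for(request_method):
    # An unhandled value returns None -- exactly the "easy to miss some
    # values" failure mode mentioned above.
    return limits_by_value.get(request_method)

print(limit_for("GET"))     # 10
print(limit_for("DELETE"))  # None
```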

Implemented in #839

@mikz mikz added this to the 3.3 milestone Jun 12, 2018
@y-tabata

Regarding "Apply rate limit to just a subset of requests": those who want to add conditions to the policy will probably want to add many of them to control very detailed cases, so I think we should reduce the overhead of adding conditions.

Regarding "Conditional policy": in my understanding, the more the conditions increase, the more the policy chains increase, so I think the overhead is big.

Regarding "Match phase": I don't understand this exactly. Can we use it like an nginx phase? If it is easy and intuitive to define conditions, we can apply this one.

Regarding the alternative approaches: I think they can be error prone too. If we define conditions, we should add another property dedicated to them.

mikz commented Jun 27, 2018

@y-tabata thank you!

Match phase

Yes. It would be a new phase like the existing ones. The policy:match(context) would return true or false to signal whether the policy should be executed. All these match phases would run before rewrite, and policies that return false or nil would be skipped (for all phases).
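A rough Python sketch of that flow (class and function names are made up for illustration; this is not APIcast code):

```python
# Sketch of the proposed "match" phase: policies whose match(context)
# returns a falsy value are dropped from the chain for all later phases.
class Policy:
    def match(self, context):
        return True  # executed by default

    def rewrite(self, context):
        pass

class GetOnlyRateLimit(Policy):
    def match(self, context):
        # this policy opts in only for GET requests
        return context.get("method") == "GET"

def active_policies(chain, context):
    # run the match phase before rewrite; skip policies that opted out
    return [p for p in chain if p.match(context)]

chain = [Policy(), GetOnlyRateLimit()]
print(len(active_policies(chain, {"method": "POST"})))  # 1 -- GET-only policy skipped
print(len(active_policies(chain, {"method": "GET"})))   # 2 -- both policies run
```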

Apply rate limit to a subset of requests

Could you try to make a representative example of what kind of rules, and how many, we are talking about here? It would be really good to have examples, so we can see the conditions and make sure they are covered.

Then we could evaluate different approaches and see how each would look.
Maybe both the "match phase" and the "conditional policy" are the wrong approaches for what we want to do here.
It is possible that the rate limiting policy is quite special and will require a condition for every rate limit instead of one condition per policy (with several policies used for different conditions).

Even if we agree that is the required approach, it should share code with the conditional policy (which we need for other policies anyway).

We are researching/defining what proper rate limiting done only in the gateway could look like: https://github.com/3scale/ostia/issues/17#issuecomment-396913897
If you have any scenarios to contribute, we can try to make sure they are covered.

Conditional policy

Yes, there could be overhead, but it really depends on what kind of conditions you have.
You can think of a conditional policy as a group of policies that either gets executed or not.

It creates duplication only when you want the same set of policies executed under different conditions, but that could be solved with an "or". We really need some examples to see where the overhead would be.

@y-tabata

One of the most important features of the rate limit in APIcast is that it achieves a strict rate limit.
So I think one of its main objectives is to protect API backends.

From this point of view, the use cases are as follows.

  • Using with the Upstream policy, configure a rate limit for each endpoint.
    backend A: 100 rps, backend B: 50 rps.

  • Reduce the number of requests if there is a resource/HTTP method whose load is high.
    GET: 100 rps, POST: 50 rps; or /user_name: 100 rps, /account_detail: 50 rps.

  • If needed, control the number of requests according to the grade of the user/client.
    special: 100 rps, normal: 50 rps.

Combining the above use cases, an example is as follows.

backend      resource                grade     limit
backend A    *                       *         100
backend A    *                       normal    20
backend A    POST /account_detail    *         10
backend A    POST /account_detail    normal    2
backend B    *                       *         50
backend B    *                       normal    10
backend B    POST /account_detail    *         5
backend B    POST /account_detail    normal    1
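The table could be read like this minimal Python sketch (illustrative only), where "*" is a wildcard and every matching row's limit applies to a request:

```python
# The limit table above as data: (backend, resource, grade, limit).
LIMITS = [
    ("backend A", "*", "*", 100),
    ("backend A", "*", "normal", 20),
    ("backend A", "POST /account_detail", "*", 10),
    ("backend A", "POST /account_detail", "normal", 2),
    ("backend B", "*", "*", 50),
    ("backend B", "*", "normal", 10),
    ("backend B", "POST /account_detail", "*", 5),
    ("backend B", "POST /account_detail", "normal", 1),
]

def matching_limits(backend, resource, grade):
    # "*" matches anything; every matching row's limit applies at once
    return [limit for b, r, g, limit in LIMITS
            if b == backend
            and r in ("*", resource)
            and g in ("*", grade)]

# A "normal" user POSTing to /account_detail on backend A hits four rows:
print(sorted(matching_limits("backend A", "POST /account_detail", "normal")))
# [2, 10, 20, 100]
```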

mikz commented Jul 9, 2018

@y-tabata I'm not sure if I got this right. I don't see how you got to numbers like 1 or 2 in the limit.

Do you want to rate limit the upstream and each endpoint regardless of the client?
Just considering the first rule: backend A: 100 rps, backend B: 50 rps.

Let's say Backend A is an API:

GET /account_detail
POST /account_detail

And Backend B an API:

GET /user_detail
POST /user_detail

If there are two users, how many requests can they make to Backend A/B? 100 in total, 100 each, or 100 each per endpoint?

If User 1 makes 70 requests to GET /account_detail, can User 2 make just 30? To GET /account_detail, or to the whole Backend A? But both could still make 100 to Backend B?

I think we should try to split this into two parts: the rate limit definitions, and the effective rate limit per request.

For example the rate limit definition would be:

  1. {{ upstream.host }} with 100 rps for "backend-a" and 50 rps for "backend-b"
  2. {{ user.id }}{{ user.grade }} with 100 rps for user.grade == "special" and 50 rps for user.grade == "normal"
  3. {{ req.method }}_{{ req.path }} with 50 rps for "POST_/account_detail"

Using just rule 1, any number of users making requests could together make just 100 rps to backend-a and 50 rps to backend-b.

Using just rule 2, each user would get their own rate limit, so each user could make 100 or 50 rps regardless of backend or endpoint.

And using just rule 3, all requests combined could make just 50 rps to POST /account_detail; everything else would be unlimited.

Now combining those three would create the following limits:

User 1 with grade "normal" could make 50 rps, but only if the other limits allow it too. So if someone else (User 2, special) makes 70 requests to backend-a, then User 1 could make only 30 requests to backend-a even though their own limit is 50. The next second, User 1 can make 50 requests to backend-a (if User 2 hasn't made any), using their full 50 rps. Other users could use the remaining 50 rps of backend-a's limit.

I'm thinking of rate limits as being applied independently: a user either passes all of them, or one of them stops the request.
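That model can be sketched in a few lines of Python (keys and limits are illustrative, not APIcast code):

```python
# Limits apply independently: a request passes only if every limit it
# matches still has headroom, and then all of its counters are incremented.
from collections import defaultdict

counters = defaultdict(int)

def allow(keys_and_limits):
    # keys_and_limits: list of (counter_key, max_per_window) for this request
    for key, limit in keys_and_limits:
        if counters[key] >= limit:
            return False  # one exhausted limit stops the request
    for key, _ in keys_and_limits:
        counters[key] += 1
    return True

# Rule 1 (backend) + rule 2 (per-user) from the comment above:
user1 = [("backend-a", 100), ("user1_normal", 50)]
# User 1 exhausts their own 50 rps limit before the backend's 100 rps:
passed = sum(allow(user1) for _ in range(60))
print(passed)  # 50 -- stopped by the per-user limit, not the backend one
```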

@y-tabata

Do you want to rate limit the upstream and each endpoint regardless of a client?

Yes, but the third use case is related to the user/client.

If there are two users, how many requests they can make to the Backend A/B ? 100 in total, or 100 each, or 100 each per endpoint?
If User 1 makes 70 requests to GET /account_detail, then User 2 can make just 30 ? To GET /account_detail? Or to whole Backend A ? But both can still make 100 to Backend B.

For example, if User 1's grade is "normal" and User 2's grade is "special":
For the Backend A,

  • User 1's requests + User 2's requests <= 100 rps.
  • User 1's requests <= 20 rps.

However, especially for the API POST /account_detail of the Backend A,

  • User 1's requests + User 2's requests <= 10 rps.
  • User 1's requests <= 2 rps.

Concretely, after User 1 makes 1 request to POST /account_detail and User 2 makes 7 requests to POST /account_detail,

  • User 1 can make 1 ( = 2 - 1 ) request to POST /account_detail or 19 ( = 20 - 1 ) requests to GET /account_detail.
  • User 2 can make 2 ( = 10 - 8 ) requests to POST /account_detail or 92 ( = 100 - 8 ) requests to GET /account_detail.
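The arithmetic above, as a quick Python check (limits taken from the table in the earlier comment):

```python
# Limits for backend A from the table above.
endpoint_limit = 10   # POST /account_detail, combined
user1_post_limit = 2  # User 1 (normal) on POST /account_detail
user1_backend_limit = 20  # User 1 (normal) on backend A overall
used_user1, used_user2 = 1, 7

print(user1_post_limit - used_user1)                 # 1 more POST for User 1
print(endpoint_limit - (used_user1 + used_user2))    # 2 more POSTs for User 2
print(user1_backend_limit - used_user1)              # 19 GETs left for User 1
```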

mikz commented Jul 10, 2018

@y-tabata thanks! I think I understand it better now. Will try to incorporate that into our design proposal.

@y-tabata

We can close this with the Conditional Policy, right?

mikz commented Jul 23, 2018

@y-tabata I don't think this is done yet. You'd have to create an instance of this policy for each condition you have, while the rate limit policy applies all limits or none only within one instance of itself. If you split the rate limits into separate policies, it would commit only part of them when some rule is over the limit.
Let's say you have limits A, B and C. If the user were at limit C and each limit were in a different policy, it would increment A and B, then try to increment C and roll it back because it is over the limit. But it would never roll back A or B. That happens only when all the limits are in one policy.
That's why we have the outstanding item in the TODO list.
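A minimal Python sketch of that all-or-nothing behavior (illustrative only, not APIcast's actual implementation):

```python
# Why limits A, B and C must live in one policy instance: the increment
# either commits for all keys or is rolled back for all of them.
counters = {"A": 0, "B": 0, "C": 4}
limits = {"A": 10, "B": 10, "C": 4}   # C is already at its limit

def incoming(keys):
    taken = []
    for key in keys:
        if counters[key] >= limits[key]:
            for k in taken:           # roll back every increment made so far
                counters[k] -= 1
            return False
        counters[key] += 1
        taken.append(key)
    return True

print(incoming(["A", "B", "C"]))  # False -- C is over its limit
print(counters)                   # {'A': 0, 'B': 0, 'C': 4} -- A and B rolled back
```

With separate policies there would be no shared `taken` list, so A and B would stay incremented even though the request was rejected.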

I'm wondering if what you need could be done by templating the limit value with Liquid.

Limiting upstream backend a/b:

key: "{{ upstream.host }}"
limit: |
  {%- case upstream.host -%}
  {%- when 'backend_a' -%} 100
  {%- when 'backend_b' -%} 50
  {%- endcase -%}

Limiting user grade

key: "user_{{ jwt.uid }}_{{ jwt.grade }}"
limit: |
  {%- case jwt.grade -%}
  {%- when 'special' -%} 100
  {%- else -%} 50
  {%- endcase -%}

Limiting endpoint:

key: "endpoint_{{ upstream.host }}_{{ request.method }}_{{ request.path }}"
limit: |
  {%- capture endpoint -%}{{ request.method }} {{ request.path }}{%- endcapture -%}
  {%- case upstream.host -%}
  {%- when 'backend_a' -%}
    {%- case endpoint -%}
    {%- when 'POST /account_details' -%} 10
    {%- endcase -%}
  {%- when 'backend_b' -%}
    {%- case endpoint -%}
    {%- when 'POST /account_details' -%} 5
    {%- endcase -%}
  {%- endcase -%}

User grade normal on backend a/b

key: "user_{{ jwt.uid }}_upstream_{{ upstream.host }}"
limit: |
  {%- case jwt.grade -%}
  {%- when 'normal' -%}
    {%- case upstream.host -%}
    {%- when 'backend_a' -%} 20
    {%- when 'backend_b' -%} 10
    {%- endcase -%}
  {%- endcase -%}

And so on. Some of the conditions might be better expressed as conditions rather than as values of the limit, I guess. But it still offers some flexibility: for example, you would not have to specify two rate limits for backend a/b but just one, and template the value. The same goes for grade normal/special. For special cases like user id + user grade + upstream + endpoint, it might be best to express it as both a condition and a templated value.

davidor commented Jul 23, 2018

@mikz I wanted to expand a bit on the alternative we have here, which you mentioned in the last paragraph of your comment. We could add support for the kind of conditions that we use in the conditional policy. So, for example, your first limit could be defined like this:

"key": "upstream_A", // or {{ host }}
"condition": "{{ host == 'upstream_A' }}",
"count": 100, "window": 60

"key": "upstream_B", // or {{ host }}
"condition": "{{ host == 'upstream_B' }}",
"count": 50, "window": 60

and the second:

"key": "user_{{ jwt.uid }}",
"condition": "{{ jwt.grade == 'special' }}",
"count": 100, "window": 60

"key": "user_{{ jwt.uid }}",
"condition": "{{ jwt.grade != 'special' }}",
"count": 50, "window": 60

It might be more explicit and be less error prone in some cases. The problem with this approach is duplication. For cases with many different params and possible values for each, the number of limits to be defined grows quickly. For example, with something like your last example: user_{{ jwt.grade }}_upstream_{{ upstream.host }}. If we had 5 possible grades and 5 hosts, we'd need to define 25 different limits instead of just 1 with a template and a large case.
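The combinatorial growth is easy to see (the grade and host values below are hypothetical):

```python
# With explicit per-value conditions, the number of limit definitions is the
# product of the value sets -- versus one templated key with a large case.
from itertools import product

grades = ["special", "gold", "silver", "bronze", "normal"]  # 5 hypothetical grades
hosts = ["a", "b", "c", "d", "e"]                           # 5 hypothetical hosts

explicit_limits = list(product(grades, hosts))
print(len(explicit_limits))  # 25 definitions needed
```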

@y-tabata

@mikz @davidor thanks!
I understand we cannot close this yet.

@y-tabata

@mikz how about we change lua-resty-limit-traffic and the rate-limit policy to commit after the access phase, for example in the post_action phase?

mikz commented Jul 31, 2018

@y-tabata interesting idea! But then, during the window of the request (let's say it takes 1 second), more requests could get through than the rate limit allows.

Let's say the rate limit value is 1.
The first request comes in, checks the current value (0), increments it in memory (1), and is let through.
The second request comes in, checks the current value (still 0), increments it in memory (1), and is let through.
The first request finishes; its value is committed to the store (1).
The second request finishes; its value is committed to the store (2).
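A Python simulation of that race (illustrative; the dict stands in for the shared store):

```python
# With the commit deferred to post_action, two in-flight requests both
# read the old counter value and both pass the check.
store = {"count": 0}
LIMIT = 1

def access_phase():
    # check against the shared store; the increment is only staged in memory
    return store["count"] < LIMIT

def post_action():
    # the increment reaches the shared store only after the request finishes
    store["count"] += 1

ok1 = access_phase()   # request 1 checks: 0 < 1, passes
ok2 = access_phase()   # request 2 checks before request 1 commits: still 0 < 1
post_action()
post_action()
print(ok1, ok2, store["count"])  # True True 2 -- limit of 1, but 2 got through
```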

If we do that, it should be opt-in, as it makes the rate limits noticeably less consistent.

We could try to do this in the content phase, before the request is passed upstream, but any other policy that introduces a significant delay (like talking to an external service) and is evaluated before commit is called would still widen this window of inconsistency.

y-tabata commented Aug 1, 2018

@mikz thanks!

Considering consistency, I think @davidor's idea is good.

Regarding my idea: to make it more consistent we could cache the number of uncommitted requests, but I think the overhead is big.

How about committing in the access phase, and uncommitting and returning errors in the content phase?
Of course there are cases where fewer requests than the rate limit get through, but I think this is better than the previous idea.
It may be good to be able to opt in to either idea.

y-tabata commented Aug 2, 2018

How about committing in the access phase, and uncommitting and returning errors in the content phase?
Of course there are cases where fewer requests than the rate limit get through, but I think this is better than the previous idea.

Maybe this is not good after all, because the commits remain when another policy fails in the access phase.
