
make message rate configurable, including burst mode #908

Closed
wants to merge 1 commit

Conversation

larsks
Contributor

@larsks larsks commented Oct 5, 2015

This adds several knobs that control how fast Sopel will send multiple
messages to the same recipient:

- `bucket_burst_tokens` -- controls how many messages may be sent
  in "burst mode", i.e., without any inter-message delay.
- `bucket_refill_rate` -- once exhausted, burst tokens are refilled at a
  rate of `bucket_refill_rate` tokens/second.
- `bucket_empty_wait` -- how long to wait between sending messages when
  not in burst mode.

This permits a bot to return a few lines of information quickly, while
still engaging rate limiting to prevent flood protection from kicking
in.
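
As a sketch, the three knobs might look like this in a bot's config file. The `[core]` section name is an assumption for illustration; the final spelling and placement of these options may differ:

```ini
[core]
# Allow up to 5 messages with no inter-message delay ("burst mode")
bucket_burst_tokens = 5
# Once exhausted, regain 1 burst token per second of idle time
bucket_refill_rate = 1
# Wait 0.7 seconds between messages while the bucket is empty
bucket_empty_wait = 0.7
```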

How it works:

When sending to a new recipient, we initialize a token counter to
bucket_burst_tokens. Every time we send a message, we decrement this
by 1. If the token counter reaches 0, we engage the rate limiting
behavior (which is identical to the existing code, except that the
minimum wait of 0.7 seconds is now configurable).

When sending a new message to an existing recipient, we check if the
token counter is exhausted. If it is, we refill it based on the elapsed
time since the last message and the value of bucket_refill_rate, and
then proceed as described above.
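
The mechanism described above can be sketched as a small token-bucket class. This is an illustrative sketch only; the class and method names (`MessageBucket`, `delay_before_send`) are hypothetical and do not reflect Sopel's actual internals:

```python
import time


class MessageBucket:
    """Illustrative per-recipient token bucket (names are hypothetical)."""

    def __init__(self, burst_tokens=5, refill_rate=1.0, empty_wait=0.7):
        self.burst_tokens = burst_tokens   # bucket_burst_tokens
        self.refill_rate = refill_rate     # bucket_refill_rate (tokens/sec)
        self.empty_wait = empty_wait       # bucket_empty_wait (seconds)
        self.tokens = burst_tokens         # new recipients start with a full bucket
        self.last_sent = time.monotonic()

    def delay_before_send(self):
        """Return how long to wait before sending the next message."""
        now = time.monotonic()
        if self.tokens <= 0:
            # Refill from elapsed time, capped at the burst size.
            elapsed = now - self.last_sent
            self.tokens = min(self.burst_tokens,
                              self.tokens + elapsed * self.refill_rate)
        self.last_sent = now
        if self.tokens > 0:
            self.tokens -= 1
            return 0.0             # burst mode: no inter-message delay
        return self.empty_wait     # bucket empty: rate-limited wait
```

With the defaults above, a reply of five lines or fewer goes out immediately, and only longer output falls back to the fixed inter-message wait.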

@maxpowa
Contributor

maxpowa commented Oct 14, 2015

Using this change on my test bot -- works very well, have yet to encounter any issues.

@thomastanck
Contributor

If you're still active, could you resolve the new merge conflicts that have popped up? +1

@larsks
Contributor Author

larsks commented Feb 4, 2016

I'll take a look. It looks as if in this case "merge conflict" means "all the code my patch touched has been relocated".

@embolalia
Contributor

I'm really hesitant to touch the rate limiting code without doing some serious testing on it first. This is getting discussed in #952. It looks like this is a replacement of the existing PRIVMSG limit system following that model, which is good. When I get the chance, I'd like to run some stress tests with it on a bunch of networks to see how it holds up in practice.

I'm definitely okay with merging this system for just PRIVMSG now, and then figuring out how to pull it in for everything else later. However, I don't really like having configs for these internals. There's no way for a user to know what to put for them, whether they've set it right, and whether their configuration will lead to unexpected and confusing errors later on. Moreover, I think it's highly unlikely that (assuming the stress tests work out with the defaults), anyone would ever really change these (let alone to something that wouldn't be quietly broken).

@larsks
Contributor Author

larsks commented Nov 16, 2016

I'm closing this old PR. Cheers!

@larsks larsks closed this Nov 16, 2016
@dgw
Member

dgw commented Jun 7, 2018

@larsks Would you like to continue working on this, or should someone else take over? This commit (f58fa9f) could be @deathbybandaid's starting point for #1342, if you're done with it. (Anything built on this will include the existing commit & authorship info, I'll make sure of it. 😸)

If you're not done with it, feel free to reopen. I took over maintaining Sopel a few months ago, so if you closed this because it stalled, it can be un-stalled now.

@dgw
Member

dgw commented Mar 23, 2019

I'm going to reopen this just so we don't lose track of it, since closed PRs basically disappear into a black hole unless you know exactly what you're looking for.

If someone wants to replace/extend it, we'll label this appropriately and close it again at that time.

@dgw dgw reopened this Mar 23, 2019
@dgw dgw added this to the 7.0.0 milestone Mar 24, 2019
@dgw dgw removed the Tweak label Mar 24, 2019
dgw added a commit to kwaaak/sopel that referenced this pull request Mar 25, 2019
Original change description by @larsks (from sopel-irc#908, slightly edited):

This adds several knobs that control how fast the bot sends multiple
messages to the same recipient:

- bucket_burst_tokens -- this controls how many messages may be sent in
  "burst mode", i.e., without any inter-message delay.
- bucket_refill_rate -- once exhausted, burst tokens are refilled at a
  rate of bucket_refill_rate tokens/second.
- bucket_empty_wait -- how long to wait between sending messages when
  not in burst mode.

This permits the bot to return a few lines of information quickly and
not trigger flood protection.

How it works:

When sending to a new recipient, we initialize a token counter to
bucket_burst_tokens. Every time we send a message, we decrement this by
1. If the token counter reaches 0, we engage the rate limiting behavior
(which is identical to the existing code, except that the minimum wait
of 0.7 seconds is now configurable).

When sending a new message to an existing recipient, we check if the
token counter is exhausted. If it is, we refill it based on the elapsed
time since the last message and the value of bucket_refill_rate, and
then proceed as described above.

----

Adapted to current Sopel code and rebased by these users, respectively:

Co-authored-by: kwaaak <[email protected]>
Co-authored-by: dgw <[email protected]>
@deathbybandaid
Contributor

I think this one could be closed since #1518 takes care of this.

@dgw dgw added the Replaced Superseded by a newer PR label Apr 3, 2019
@dgw dgw removed this from the 7.0.0 milestone Apr 3, 2019
@dgw
Member

dgw commented Apr 3, 2019

Definitely. I just forgot to close it and add the "Replaced" label.

@dgw dgw closed this Apr 3, 2019
kwaaak added a commit to kwaaak/sopel that referenced this pull request May 13, 2019
kwaaak added a commit to kwaaak/sopel that referenced this pull request May 13, 2019
Labels: Feature, Replaced (Superseded by a newer PR)