Conversation
Instead of processing requests directly in `tide` request handlers, the request handler simply adds the request to a queue, and the requests are dequeued and processed by a fixed number of worker threads. This solves two problems:

* Requests taking a long time to process do not time out the HTTP request.
* We limit CPU usage by limiting the number of worker threads, so the server does not get overloaded.

Closes #1151
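A minimal sketch of the pattern, assuming `async_std` channels and a hypothetical `FaucetRequest` type (not the exact code from this PR):

```rust
use async_std::channel::{unbounded, Receiver, Sender};
use async_std::task;

// Hypothetical request type; the real handler queues faucet requests.
struct FaucetRequest;

async fn process(_request: FaucetRequest) {
    // Long-running work happens here, off the HTTP request path.
}

async fn worker(receiver: Receiver<FaucetRequest>) {
    // Each worker dequeues and processes requests one at a time.
    while let Ok(request) = receiver.recv().await {
        process(request).await;
    }
}

fn spawn_workers(num_workers: usize) -> Sender<FaucetRequest> {
    let (sender, receiver) = unbounded();
    for _ in 0..num_workers {
        // The channel is multi-consumer, so workers share the receiving end.
        task::spawn(worker(receiver.clone()));
    }
    sender
}
```

The `tide` handler then only needs to `send` the request on the returned channel and can respond immediately, so slow processing never holds an HTTP connection open.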
This fixes a race condition where we could be shut down in the middle of transferring several grants to a key. Previously, after restarting, we would start granting to this key from scratch, potentially granting more than we intended and even, if we get shut down frequently, not making progress at all.

Note, there is still a small race condition: if we get shut down just after building a transfer but before incrementing the count of transfers to that key (a very small window of time), we may repeat the grant the next time we are restarted. This is a minor problem, as it means at worst transferring one extra grant per restart. To fix this, we would need to synchronize between the faucet's storage and the wallet library's storage, which I think requires some kind of composability between `AtomicStore`s.
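A rough sketch of the recovery behavior being described, assuming a hypothetical `Store` type with a persisted per-key counter (the real faucet keeps this counter in its persistent storage):

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the faucet's key type and persistent store.
type UserPubKey = String;

struct Store {
    // In the real faucet this counter survives restarts; a HashMap stands in
    // for the persistent storage here.
    num_grants: HashMap<UserPubKey, usize>,
}

impl Store {
    fn num_grants(&self, key: &UserPubKey) -> usize {
        *self.num_grants.get(key).unwrap_or(&0)
    }

    fn increment_num_grants(&mut self, key: &UserPubKey) {
        *self.num_grants.entry(key.clone()).or_insert(0) += 1;
    }
}

fn build_and_submit_transfer(_key: &UserPubKey) {
    // Stands in for building and submitting a transfer via the wallet.
}

fn transfer_grants(store: &mut Store, key: &UserPubKey, total: usize) {
    // Resume from the persisted count instead of granting from scratch.
    for _ in store.num_grants(key)..total {
        build_and_submit_transfer(key);
        // If we are shut down after the transfer is built but before this
        // increment, we repeat at most one grant on restart: the small
        // remaining window described above.
        store.increment_num_grants(key);
    }
}
```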
Note that we aren't currently getting parallelism out of having multiple workers, since each worker currently locks the global wallet exclusively while it is transferring. So for now it doesn't make much sense to configure more than one worker.
```rust
if let Some(val) = val {
    // This is the most recent value for `key`, and it is an insert, which means
    // `key` is in the queue. Go ahead and add it to the index and the message
    // channel.
    index.insert(key.clone(), Some(val));
} else {
    // This is the most recent value for `key`, and it is a delete, which means
    // `key` is not in the queue. Remember this information in `index`.
    index.insert(key.clone(), None);
}
```
I think this is the same as `index.insert(key.clone(), val);`, so we could also write it in the more compact version without the `if` statement.
Oh, good point. There used to be some extra logic in one of the branches, but I removed it.
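For reference, the compact version the comment is suggesting:

```rust
// Whether `val` is `Some` (insert) or `None` (delete), the most recent value
// for `key` is exactly what belongs in the index.
index.insert(key.clone(), val);
```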
```rust
/// form `UserPubKey -> Option<usize>`. An entry `key -> Some(n)` corresponds to updating the
/// counter associated with `key` to `n`. An entry `key -> None` corresponds to deleting the entry
/// for `key`. We can recover the in-memory index by simply replaying each log entry and inserting
/// or deleting into a `HashMap` as indicated.
```
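A sketch of that replay, assuming a hypothetical key type and an iterator over log entries (the real code reads the entries back from persistent storage):

```rust
use std::collections::HashMap;

type UserPubKey = String; // stand-in for the real key type

/// Rebuild the in-memory index by replaying log entries in order: each
/// `Some(n)` updates the counter for its key, each `None` deletes it.
fn replay(
    log: impl Iterator<Item = (UserPubKey, Option<usize>)>,
) -> HashMap<UserPubKey, usize> {
    let mut index = HashMap::new();
    for (key, entry) in log {
        match entry {
            // `key -> Some(n)`: update the counter associated with `key` to `n`.
            Some(n) => {
                index.insert(key, n);
            }
            // `key -> None`: delete the entry for `key`.
            None => {
                index.remove(&key);
            }
        }
    }
    index
}
```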
It wasn't too hard to understand the code thanks to the explanations here, but I wonder if it would make the code clearer by itself if, instead of using `Some(0)`, `Some(n > 0)`, and `None` to represent the state of a faucet request, we had an enum for that.
Yep, good suggestion.
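A sketch of what such an enum might look like (hypothetical names, not necessarily what ended up in the PR):

```rust
/// Hypothetical replacement for the `Some(0)` / `Some(n > 0)` / `None`
/// encoding of a faucet request's state.
enum RequestState {
    /// The request is queued but no grants have been transferred yet
    /// (previously `Some(0)`).
    Queued,
    /// Some grants have already been transferred (previously `Some(n)` with
    /// `n > 0`).
    InProgress { grants_sent: usize },
    /// The request is not in the queue (previously `None`).
    Absent,
}
```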
Weird. I get 2 faucet test binaries, so the new test runs twice. This also seems to happen on the CI.
Looks correct to me. I did also try it with cape-demo-local and https://github.com/EspressoSystems/cape-ui/pull/529, and that worked fine.

I got some errors when running the tests, but they might not be related to the changes in this PR.
@philippecamacho that appears unrelated, but I've also never seen that error before. Can you checkout …
For me all tests pass.
One thing I noticed is that sometimes the faucet process from the test does not seem to terminate after the tests are over.
I'm getting the same error on …
@philippecamacho @sveitser this failing test is because the relayer in this test uses port 60000, and I already had a process using port 60000... basically, I think this test was sending relayer requests to the CAPE wallet API.