Is it possible to fix race condition? #23
Comments
For that specific case, we could use a lock to only allow one execution of `getPort()` at a time.
I was going to solve this with a mutex, but it felt wrong. Honestly, it seems like it should be the consumer's responsibility to synchronize getting a port and using it. Even if you make access to resolving a port mutually exclusive, after it resolves the consumer will likely do something async with the port, which means there is still an opportunity for a race. Perhaps more than anything, an update to the readme explaining this caveat (and potential workarounds) would be helpful. For instance, I would expect something like this to solve @max-block's problem:

import http from 'http';
import test from 'ava';
import getPort from 'get-port';
import superagent from 'superagent';
import { Mutex } from 'async-mutex';
const lock = new Mutex();
const createTestServerClient = async () => {
const release = await lock.acquire();
try {
const port = await getPort();
http.createServer(handler).listen(port); // 'handler' is your request handler
return superagent(`http://127.0.0.1:${port}`);
} finally {
release();
}
}
test('test #1', async t => {
const client = await createTestServerClient();
// ...
});
test('test #n...', async t => {
const client = await createTestServerClient();
// ...
});
What about using the same trick that ephemeral-port-reserve uses? https://github.com/yelp/ephemeral-port-reserve Specifically, force the chosen port into the TIME_WAIT state before returning it. This should eliminate the race condition, assuming a couple of (probably) reasonable assumptions hold.
I can also confirm the issue; using a limited range of ports makes it more likely. It is still pretty rare with this code, but I could reproduce the race with a tool I've worked with that tests different schedules of the event loop. The race may occur with as few as 2 concurrent accesses. As discussed, the race arises from the value of the variable port defined in the server.listen callback (line 8 of index.js) and potential races between other calls.

I can also confirm that the solution given by @Charliekenney23 works (https://github.com/sanji-programmer/get-port-race-example/blob/master/fix-race-client.js).

I'm not sure we can use @chriskuehl's trick in Node, but we can use the idea. For instance, getPort could take a 'retentionTime' option ensuring that a returned port will not be returned again for some time. I think we can implement this without any lock by keeping a list modified in the server.listen callback. What do you think? If there is interest, I can sketch something in this direction.
I've noticed repeated ports in-process under circumstances where get-port is called a lot and ports are bound and unbound frequently. In this scenario the port returned from get-port actually can't be bound to immediately, so there's an in-process race condition as well. I've addressed it in #36 using a sort of GC sweeping approach to the ports - this is more efficient than one interval per port, or iterating TTL metadata, and it can also be quite trivially adapted to an external data source for solving this issue as well. I didn't do that because I know @sindresorhus likes to rigorously curate his dependency trees, so I think there probably needs to be a discussion about what external data source to use.
I use this library in my jest tests. Jest runs ~20 test files in parallel, all of which use it. For such a use case the chance of a race condition is not tiny; in my case I think it's around 30%.
Is it possible to fix the race condition?