Is it possible to fix race condition? #23

Open
max-block opened this issue Oct 18, 2018 · 5 comments

@max-block

I use this library in my jest tests. Jest runs ~20 test files in parallel which use it. For such a use case the chance of a race condition is not tiny; in my case I'd estimate it at around 30%.

Is it possible to fix the race condition?

@sindresorhus
Owner

For that specific case, we could use a lock to only allow one execution of get-port at a time.

@0xch4z

0xch4z commented May 29, 2019

I was going to solve this with a mutex, but it felt wrong. Honestly, it seems like it should be the consumer's responsibility to synchronize getting a port and using it.

Even if you make access to resolving a port mutually exclusive, after it resolves the consumer will likely do something async with the port, which means there is an opportunity for getPort to resolve the same port again and, boom, another race condition.

Perhaps more than anything, an update to the readme explaining this caveat (and potential workarounds) would be helpful?

For instance, I would expect something like this to solve @max-block's problem.

import http from 'http';

import test from 'ava';
import getPort from 'get-port';
import superagent from 'superagent';
import { Mutex } from 'async-mutex';

const lock = new Mutex();

// Placeholder request handler for the test server.
const handler = (request, response) => response.end('ok');

const createTestServerClient = async () => {
	const release = await lock.acquire();
	try {
		const port = await getPort();
		// Wait until the server is actually listening before releasing the
		// lock so another call can't grab the same port in the meantime.
		await new Promise(resolve => {
			http.createServer(handler).listen(port, resolve);
		});
		return superagent(`http://127.0.0.1:${port}`);
	} finally {
		release();
	}
};

test('test #1', async t => {
	const client = await createTestServerClient();
	// ...
});

test('test #n...', async t => {
	const client = await createTestServerClient();
	// ...
});

@chriskuehl

What about using the same trick that ephemeral-port-reserve uses? https://github.com/yelp/ephemeral-port-reserve

Specifically, force the chosen port into TIME_WAIT state so that it won't get given out to other processes for the kernel-configured time period (60 seconds by default) but is still available to be bound to explicitly.

This should eliminate the race condition, assuming a couple of (probably reasonable) assumptions hold (a rough sketch of the idea follows the list):

  • Any other processes on the box which are picking random ports are binding to port 0 to get a random port (probably typical)
  • The port is used within the 60 second window
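Here is a rough Node sketch of that reservation step, assuming Node's default SO_REUSEADDR behaviour on listening sockets (so the caller can later bind the port explicitly even while it sits in TIME_WAIT); the function name reservePort is made up for illustration:

import net from 'net';

const reservePort = () => new Promise((resolve, reject) => {
	const server = net.createServer();
	server.once('error', reject);
	// Bind to port 0 so the kernel picks a free ephemeral port.
	server.listen(0, '127.0.0.1', () => {
		const {port} = server.address();
		// Connect to ourselves so a real TCP connection exists on the port.
		const client = net.connect(port, '127.0.0.1');
		client.once('error', reject);
		server.once('connection', socket => {
			// Close the accepted (server-side) socket first, so it performs the
			// active close and ends up in TIME_WAIT, reserving the port.
			socket.end();
			socket.once('close', () => {
				client.destroy();
				server.close(() => resolve(port));
			});
		});
	});
});

// Usage: const port = await reservePort();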

@sanji-programmer

sanji-programmer commented Jul 8, 2019

I could also confirm the issue; using a limited range of ports makes it more likely:
https://github.com/sanji-programmer/get-port-race-example/blob/master/race.js

Still, it is pretty rare even with this code, but I could reproduce the race with a tool I've worked with that tests different schedules of the event loop. The race may occur with as few as 2 concurrent accesses.
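For reference, a minimal standalone reproduction in the same spirit might look like this (assuming a get-port version whose port option accepts an array of preferred ports; the narrow range just makes the collision easier to hit):

import getPort from 'get-port';

const range = [3000, 3001, 3002];

(async () => {
	// Neither call binds the port it probes, so two concurrent calls can
	// both see the same port as free and resolve to the same value.
	const [a, b] = await Promise.all([
		getPort({port: range}),
		getPort({port: range})
	]);

	if (a === b) {
		console.log(`Race reproduced: both calls returned port ${a}`);
	}
})();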

As discussed, the race arises from the value of the port variable defined in the server.listen callback (line 8 of index.js) and from potential races with other calls.

I can also confirm that the solution given by @Charliekenney23 works (https://github.com/sanji-programmer/get-port-race-example/blob/master/fix-race-client.js).

I'm not sure we can use @chriskuehl's trick in Node, but we can use the idea. For instance, getPort could have a 'retentionTime' option ensuring that a returned port will not be returned again for some time. I think we can implement this without any lock by keeping a list modified in the server.listen callback.

What do you think? If there is interest, I can sketch something in this direction.
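To make the idea concrete, here is a rough sketch of what that bookkeeping could look like (illustrative only; retentionTime is not an existing get-port option, and the names recentlyReturned, markReturned and isRetained are made up):

// port -> timestamp of when it was last handed out
const recentlyReturned = new Map();

const markReturned = port => recentlyReturned.set(port, Date.now());

const isRetained = (port, retentionTime = 60_000) => {
	const returnedAt = recentlyReturned.get(port);
	if (returnedAt === undefined) {
		return false;
	}

	if (Date.now() - returnedAt > retentionTime) {
		// Retention window has passed; the port may be handed out again.
		recentlyReturned.delete(port);
		return false;
	}

	return true;
};

// A wrapper around getPort() would then retry while isRetained(port) is true
// and call markReturned(port) inside the server.listen callback, before the
// probing server is closed and the port is resolved to the caller.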

@davidmarkclements
Contributor

davidmarkclements commented Nov 25, 2019

I've noticed repeated ports in-process under circumstances where get-port is called a lot and ports are bound and unbound frequently. In this scenario the port returned from get-port actually can't be bound to immediately, so there's an in-process race condition as well. I've addressed it in #36 using a sort of GC-sweeping approach to the ports - this is more efficient than one interval per port, or than iterating TTL metadata, and it can also be quite trivially adapted to an external data source for solving this issue as well. I didn't do that because I know @sindresorhus likes to rigorously curate his dependency trees, so I think there probably needs to be a discussion about what external data source to use.
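For readers curious what a sweep-style lockout can look like, here is a minimal sketch (the two-set rotation and the 15-second interval are illustrative assumptions, not necessarily what #36 implements):

// Two generations of locked-out ports, rotated by a single timer instead of
// one timer (or one TTL timestamp) per port.
const lockedPorts = {
	old: new Set(),
	young: new Set()
};

const sweepIntervalMs = 15_000;

// Each sweep discards the old generation and ages the young one, so every
// locked port stays unavailable for at least one full interval.
setInterval(() => {
	lockedPorts.old = lockedPorts.young;
	lockedPorts.young = new Set();
}, sweepIntervalMs).unref();

const lockPort = port => lockedPorts.young.add(port);

const isPortLocked = port =>
	lockedPorts.old.has(port) || lockedPorts.young.has(port);

// getPort would call lockPort(port) before resolving and skip candidates for
// which isPortLocked(port) is true.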
