Added concurrent application launch prevention #54

Merged
merged 2 commits into freedomofpress:master from prevent_second_instance
Nov 1, 2018

Conversation


@mdrose mdrose commented Oct 22, 2018

Resolves #44

Uses Unix domain sockets to see if the client is already running. If it's already running, the new process displays an error dialog and exits upon user acknowledgement.
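
A minimal sketch of how such a check might look (the socket path and function name here are illustrative, not the PR's actual code; the running instance would bind and listen on the path for its lifetime):

import socket

SOCKET_PATH = '/tmp/securedrop-client.sock'  # illustrative path, not from the PR

def client_already_running() -> bool:
    # If a live instance is listening on the socket, connect() succeeds;
    # a missing or stale socket file raises instead.
    probe = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        probe.connect(SOCKET_PATH)
        return True
    except (FileNotFoundError, ConnectionRefusedError):
        return False
    finally:
        probe.close()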

@heartsucker heartsucker left a comment


Heya, so this seems like it would allow for races. My initial thought was to use the sdc_home directory to store a lockfile using an fcntl exclusive lock. I know this isn't a particularly Qt way of doing it, but since that happens at the kernel level, it would prevent races. The dialogue box could still be used in the same way, however.

Also it seems that your way would prevent parallel testing (not necessarily bad, but just a thought). My main reason for wanting a lock was not to ensure exactly one Qt application ever runs, but to prevent a user from clicking the icon 50 times and opening way too many copies of the app. (Yes, also read/write/flag races.)

Something like this:

import fcntl
from contextlib import contextmanager
from logging import error
from os import path
from typing import Iterator

@contextmanager
def acquire_lock(sdc_home: str) -> Iterator[bool]:
    # Lock file lives in sdc_home; the kernel drops the lock when the file closes.
    with open(path.join(sdc_home, '.app-lock'), 'w') as f:
        try:
            # Non-blocking, so a second instance fails immediately instead of waiting.
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            yield True
        except (OSError, IOError):
            error('Failed to acquire lock. Are the tests already running?')
            yield False
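
For illustration, the context manager above could be used at startup roughly like this (run_application and the exit behaviour are assumptions for the sketch, not part of the suggestion):

import sys

with acquire_lock(sdc_home) as acquired:
    if not acquired:
        sys.exit(1)  # another instance (or a test run) already holds the lock
    run_application()  # hypothetical entry point; the lock is held until this returns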

I also generally worry about socket listeners in a high security environment.

Def open to hearing why your method would work better or if I'm missing something here.

mdrose commented Oct 22, 2018

@heartsucker The usual advantage would be portability, but since this is intended to run only on Qubes, that doesn't apply.

Another possibility is just checking to see if a name is bound in the abstract namespace. If it's already bound, the kernel throws an error. Pretty simple and straightforward. Avoids any filesystem involvement, like you would have with file locks.

I've pushed a commit to this PR with that implementation.
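
A minimal sketch of that check, with an illustrative identifier rather than the PR's actual one:

import socket
import sys

def prevent_second_instance(identifier: str = 'securedrop-client') -> socket.socket:
    # A leading NUL byte places the name in the Linux abstract socket namespace:
    # nothing touches the filesystem, and the kernel frees the name when the
    # process exits, however it exits.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.bind('\0' + identifier)
    except OSError:
        # Another process has already bound the name.
        print('The application is already running.', file=sys.stderr)
        sys.exit(1)
    return sock  # caller keeps a reference so the name stays bound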

@mdrose mdrose force-pushed the prevent_second_instance branch from 8da0620 to 41bd41c on October 22, 2018 16:29
@heartsucker

I prefer this implementation to your original since it seems less prone to races. I cannot, however, comment on the security implications of it.

I also suggest we replace the identifier with the sdc_home variable that gets injected (see #43). This is really the exact scenario we want to prevent, and doing that would allow for parallel testing of the app in that we could bind to '\0' + APP + sdc_home in order to get many instances up.
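
That suggestion would amount to something like the following sketch, where 'securedrop-client' stands in for APP and sdc_home is whatever home directory gets injected:

import socket

sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
# One name per sdc_home, so test runs with separate homes can each start their own instance.
sock.bind('\0' + 'securedrop-client' + sdc_home)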


mdrose commented Oct 23, 2018

@heartsucker Not quite sure where your security concern lies. It literally just asks the kernel if another process has already taken a unique name.

Anyway, looking at #43, it looks like it would be best for that to be resolved first.

@heartsucker

No actual concerns. Just stating that I can't quickly analyze it because I'm unfamiliar with it.

@mdrose mdrose force-pushed the prevent_second_instance branch 2 times, most recently from 11da3cf to ee204ed on October 25, 2018 07:05
@mdrose mdrose force-pushed the prevent_second_instance branch from ee204ed to 07a9cec on October 25, 2018 07:09

mdrose commented Oct 25, 2018

@heartsucker updated

@heartsucker heartsucker left a comment


Two nits, but otherwise works as advertised. :D

@mdrose mdrose force-pushed the prevent_second_instance branch 2 times, most recently from dd08978 to dd6efed on October 26, 2018 20:03
@mdrose mdrose force-pushed the prevent_second_instance branch from dd6efed to c228dae on October 26, 2018 20:06
@eloquence eloquence added this to the 0.1.0alpha milestone Oct 31, 2018

emkll commented Oct 31, 2018

I might be a bit late to this conversation, but my understanding is that the classic pattern is to use a lock file on the filesystem, as @heartsucker suggests, since it’s easy to debug/delete if something goes wrong. I see that the socket is not explicitly closed; will it be reliably released, even if the application does not exit cleanly?

Just chiming in regarding the possible security implications raised: I am assuming the concern was arbitrary data sent to the socket bind command. Given that it’s a transliteration of the Unix implementation (https://docs.python.org/2/library/socket.html), I believe it should not have any further implications, especially given that the address is prefixed with the application name. Also, given that a potential attacker would need to control/modify this config file, I think in this case it shouldn’t present any further risk.


mdrose commented Nov 1, 2018

@emkll If the process ends, the kernel releases the binding regardless of how it ends, the same as with file locks. File locking is more common since it's been around a lot longer, but also because the abstract namespace only exists in the Linux kernel, so you wouldn't use it if you want portability (BSD etc.). The main advantage over file locking is that it's immune to filesystem state. Honestly, though, I don't see there being a significant difference either way in this particular situation.

@heartsucker heartsucker merged commit f875dc8 into freedomofpress:master Nov 1, 2018
@heartsucker

Thanks :D :D

@mdrose mdrose deleted the prevent_second_instance branch December 15, 2018 08:04
legoktm pushed a commit that referenced this pull request Dec 11, 2023
Update description in README to mirror current implementation
legoktm pushed a commit that referenced this pull request Dec 15, 2023
parse filename when posting reply