
Sharing sockets between multiprocess workers with Socket Manager #25

Closed
naritta wants to merge 23 commits

Conversation

@naritta (Contributor) commented Sep 30, 2015

This PR implements the following issue:
Sharing sockets between multiprocess workers #24

Details are here:
http://qiita.com/ritta/items/d1ca662cdd04c54dba2c

@frsyuki (Member) commented Sep 30, 2015

First of all, this is a great achievement. Good job!
Before going into details, could you let me know how users can use SocketManager? In other words, what's the API for applications?

module ServerEngine
  module SocketManager

    def self.new_socket_manager
@frsyuki (Member) commented on this line:

looks like this method is not complete yet.

@naritta (Contributor, Author) replied:

I fixed it now.
It looks easy but it was a little confusing.

At first I thought I had to use OptionParser instead of ARGV[-1].
But OptionParser is also used on the fluentd side, so I would have to define every option on both sides to avoid OptionParser::InvalidOption, which is tedious.
Then I thought about creating an interface class for the command parser and using it on the fluentd side, like this:

opt_parser = ServerEngine::OptParser.new

But I didn't want to force users to write code like that, because it is tedious and not clean.

In the end, I reluctantly used ENV.
If I should do it another way, I will.

@naritta (Contributor, Author) commented Oct 6, 2015

Thank you very much! I love fluentd and I'm very glad to be able to get involved in this project!

First, the API to use the Socket Manager is here.

#get socket manager in a spawned worker
socket_manager = ServerEngine::SocketManager.new_socket_manager

Then, from this Socket Manager, users can create TCP or UDP sockets in a spawned worker like this.

#get FD from socket manager
fd = socket_manager.get_tcp(bind, port)
#create listen socket from FD
lsock = TCPServer.for_fd(fd.to_i)

I implemented the API in the SocketManager class, but if you think I should make the API more explicit and create a dedicated class for it, I will.
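
Putting those two snippets together, a worker might use the API roughly like this (a sketch based on the snippets above; the bind/port values are examples, and get_udp is my assumed counterpart of get_tcp, which may differ in the actual PR):

# rough sketch of a worker using the proposed API
def run
  socket_manager = ServerEngine::SocketManager.new_socket_manager

  # TCP: get an FD from the socket manager and wrap it in a listen socket
  tcp_fd = socket_manager.get_tcp('0.0.0.0', 24224)
  lsock = TCPServer.for_fd(tcp_fd.to_i)

  # UDP: same pattern, assuming a get_udp counterpart exists
  udp_fd = socket_manager.get_udp('0.0.0.0', 24224)
  usock = UDPSocket.for_fd(udp_fd.to_i)  # read datagrams with usock.recvfrom

  loop do
    conn = lsock.accept
    # ... handle the connection ...
    conn.close
  end
end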

@frsyuki (Member) commented Oct 8, 2015

@naritta thank you. How does the parent process create SocketManager::Server? How does the child process get the UNIX socket?

@frsyuki (Member) commented Oct 8, 2015

I think there is a race condition in the fundamental design. Did you consider this scenario?

  1. there is 1 parent process "P" and 2 child processes "A" and "B".
  2. A calls P's socket_fd method using DRb
  3. P creates a TCP socket for A, and sends it using send_io method.
  4. B calls P's socket_fd method using DRb
  5. P creates a TCP socket for B, and sends it using send_io method.
  6. B calls recv_io
  7. A calls recv_io

This rarely happens but its impact is significant if it happens.
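
To make this concrete, here is a minimal sketch (not the PR's code) of why a single UNIX socket shared by all children is racy: both children read from the same stream, so whichever child calls recv_io first receives whichever IO was sent first.

require 'socket'

parent_sock, child_sock = UNIXSocket.pair

2.times do |i|
  fork do
    # Both children read from the same child_sock; the kernel does not
    # route a particular send_io to a particular child.
    io = child_sock.recv_io
    puts "child #{i} received fd #{io.fileno}"
  end
end

# The parent intends the first socket for A and the second for B,
# but has no way to address a specific child over the shared stream.
2.times { parent_sock.send_io(TCPServer.new('127.0.0.1', 0)) }

Process.waitall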

@naritta (Contributor, Author) commented Nov 3, 2015

Thank you. How the parent process creates SocketManager::Server and how the child process gets the UNIX socket is described below.

  1. Just before spawning, the parent process creates a SocketManager::Server instance and the Process Manager gets it.
    https://github.com/fluent/serverengine/pull/25/files#diff-704e22e4c891dcc6b47ba7d7ecdc73f8R56
  2. Then the Process Manager creates a pair of UNIX sockets with the Socket Manager function new_unix_socket and sets the FD of one of them in the env for spawn.
    Each FD is specific to its own child process because of the env of spawn.
    https://github.com/fluent/serverengine/pull/25/files#diff-1e5e735f236537f81cca69fb746da179R167
  3. Each child process then gets the UNIX socket from the env.
    https://github.com/fluent/serverengine/pull/25/files#diff-e729c34d906c2f04e3c6849163091222R35
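
A minimal sketch of the env-based handoff in steps 2 and 3; the environment variable name and details here are my assumptions rather than the PR's exact code:

require 'socket'
require 'rbconfig'

# parent creates a dedicated pair; server_side stays in the parent for send_io,
# child_side is handed to the spawned worker
server_side, child_side = UNIXSocket.pair

pid = Process.spawn(
  { 'SOCKET_MANAGER_FD' => child_side.fileno.to_s },   # hypothetical variable name
  RbConfig.ruby, '-rsocket', '-e',
  'sock = UNIXSocket.for_fd(Integer(ENV["SOCKET_MANAGER_FD"])); puts sock.inspect',
  child_side.fileno => child_side                       # keep the FD open across spawn
)

child_side.close
Process.wait(pid)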

@naritta (Contributor, Author) commented Nov 3, 2015

For the race condition between processes, I fixed the design to use a different UNIX socket for every process.
naritta@9faaf4a

The problem between processes should be solved by this change, and I think the other problem within a single process (with various plugins) will not happen, because one process creates its sockets synchronously, in order. (For example, with in_forward and in_http configured, a race condition will not happen between in_forward and in_http within the same process.)
If this thinking is wrong, I'll fix it.
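
A small sketch of this per-worker design (illustrative, not the exact PR code): each worker gets its own dedicated UNIX socket pair, so send_io/recv_io for one worker can never be answered by another.

require 'socket'

pairs = {}   # worker id => parent-side socket

[0, 1].each do |wid|
  parent_side, child_side = UNIXSocket.pair
  pairs[wid] = parent_side
  # child_side is handed only to worker `wid` at spawn time, as in the steps above
end

# A listener sent to worker 0 travels down worker 0's own socket; worker 1
# never reads from it, so the cross-delivery race cannot occur.
pairs[0].send_io(TCPServer.new('127.0.0.1', 0))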

@frsyuki (Member) commented Nov 11, 2015

Would you mind continuing development based on the https://github.com/fluent/serverengine/tree/socket-manager-api branch? It is based on your #26 branch, and implements SocketManager as a stand-alone library that isn't tightly coupled with ProcessManager.

As the commit message explains (d7c632c), it doesn't use DRb, and it creates a socket (or named pipe) pair for each request. So it addresses the following issues I was going to mention:

  • The inter-thread race condition is a significant issue because ServerEngine can't assume that fluentd is its only user.
  • The SocketManager API should be either loosely coupled from the ProcessManager API, to make things easy to understand and easy to use, or tightly coupled even with MultiProcessServer, to let users assume that SocketManager is part of ServerEngine's standard features. For the first approach, using UNIXSocket.pair makes the API design difficult because socket pairs need to be passed to Process.spawn. And the SocketManager API has too many side effects (such as starting a DRb server) during initialization, regardless of config options.
  • DRb servers running beside SocketManager are affected by SocketManager.
  • Shutting down the DRb server is tricky when multiple SocketManager instances are created.

I confirmed that the above branch can complete the simple test cases I added, but I have not tried the Windows implementation. And as you can see from the TODO comments, it is not completely working code yet (so CI on AppVeyor doesn't pass). So I would like to rely on you to improve the Windows support.

With MultiSpawnServer (for fluentd, maybe), you can create a SocketManager::Server and set the socket path (SocketManager::Server#path) in an environment variable in the server module (Server#before_run). In a child process (Worker#run), you can create a SocketManager::Client with the path taken from the environment variable (SocketManager::Client.new(ENV[...])). You must close the SocketManager::Server (at Server#after_run) but don't have to close the SocketManager::Client.
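
To spell that out, here is a rough sketch of the server and worker modules; the constructor and method names (Server.open, Server.generate_path, Client#listen_tcp) and the environment variable name follow what later became the ServerEngine SocketManager API, so they may differ on the socket-manager-api branch:

require 'serverengine'

module MyServerModule
  def before_run
    # parent: open the SocketManager server and publish its path to workers
    @socket_manager_path = ServerEngine::SocketManager::Server.generate_path
    @socket_manager_server = ServerEngine::SocketManager::Server.open(@socket_manager_path)
    ENV['SERVERENGINE_SOCKETMANAGER_PATH'] = @socket_manager_path.to_s
  end

  def after_run
    # the Server must be closed; Clients don't have to be
    @socket_manager_server.close
  end
end

module MyWorkerModule
  def run
    # child: connect to the SocketManager through the path from the environment
    client = ServerEngine::SocketManager::Client.new(ENV['SERVERENGINE_SOCKETMANAGER_PATH'])
    lsock = client.listen_tcp('0.0.0.0', 24224)
    loop do
      conn = lsock.accept
      # ... handle the connection ...
      conn.close
    end
  end
end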

@frsyuki closed this Nov 11, 2015
@naritta (Contributor, Author) commented Nov 11, 2015

I got it. Thank you for the review!

@frsyuki (Member) commented Nov 13, 2015

@naritta If you're struggling to wrap a socket handle in a TCPServer or UDPSocket, you may try calling the int rb_w32_wrap_io_handle(SOCKET s, int flags) function defined in ruby itself (ruby-2.2.2/win32/win32.c). I think you can call it using ffi. It returns an fd. Then you may use TCPServer.for_fd or UDPSocket.for_fd.

The other direction is also available: extracting the SOCKET from a TCPServer or UDPSocket with SOCKET rb_w32_get_osfhandle(socket_or_server.fileno).
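
If it helps, a rough ffi sketch of the first approach; this is untested, and the details (attaching to the current process, :uintptr_t for SOCKET, flags value) are my assumptions, so treat it as a starting point only:

require 'ffi'
require 'socket'

module RubyWin32Socket
  extend FFI::Library
  # assumption: on Windows rubies the running ruby runtime exports this symbol,
  # so we attach to the current process rather than a named DLL
  ffi_lib FFI::CURRENT_PROCESS
  # int rb_w32_wrap_io_handle(SOCKET s, int flags)
  attach_function :rb_w32_wrap_io_handle, [:uintptr_t, :int], :int
end

# Given a raw SOCKET handle received from the parent process:
# fd = RubyWin32Socket.rb_w32_wrap_io_handle(socket_handle, 0)
# lsock = TCPServer.for_fd(fd)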

@naritta (Contributor, Author) commented Nov 19, 2015

Thank you very much for teaching me!
This is the information that I really wanted!!!
Finally, I've succeeded in wrapping a socket handle in a TCPServer.
(Before I knew this, I used _open_osfhandle and wrapped it in the IO class.)
Thank you very much!
