
WebSocket should be pickable #216

Closed
manatlan opened this issue Nov 13, 2018 · 18 comments

Comments

@manatlan

Currently, the WebSocket object is not picklable, so it's impossible to share WebSocket clients in a multi-worker environment.

@tomchristie
Member

It’s not clear what you’re trying to do exactly, but either way I’ve no interest in supporting pickle. Language-specific serialisation formats aren’t a good idea for much of anything.

@manatlan
Author

A common pattern, when accepting a WebSocket client, is to maintain a list of all connected clients on the server side.

Think of a "chat service" over WebSocket; the server side could look like:

await ws.accept()
clients.append(ws)
while True:
    txt = await ws.receive_text()      # receive a message
    for client in clients:             # and broadcast it to everybody
        await client.send_text(txt)

It works well with one worker (process). But if you spawn more than one worker with gunicorn, this clients list will not be shared with the other processes... so each process has its own list of clients.

If WebSocket was picklable, each process could pickle/unpickle this list before use (and make it shareable between processes)... (AFAIK redis/memcache use pickle to save object state, no?)

@tomchristie
Member

For multi-worker we’ll want to use "broadcast" channels, e.g. Redis PUB/SUB or Postgres LISTEN/NOTIFY. Each worker will track its own connections and listen for broadcast messages to send out. Have a look at Django Channels to see how this sort of setup works. That way it’s not just multi-worker, but multi-host.
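
A rough sketch of that pattern, assuming a Redis server on localhost and the redis-py asyncio client (redis>=4.2; in 2018 aioredis would have filled the same role). Each worker keeps its own set of local WebSocket connections and relays anything published on a shared channel; the channel name, endpoint, and startup hook are illustrative, not an existing Starlette API:

    import asyncio
    import redis.asyncio as aioredis
    from starlette.applications import Starlette
    from starlette.routing import WebSocketRoute
    from starlette.websockets import WebSocket, WebSocketDisconnect

    CHANNEL = "chat"                  # hypothetical channel name
    redis_client = aioredis.Redis()   # assumes Redis on localhost:6379
    local_clients = set()             # connections owned by *this* worker only

    async def redis_listener():
        # Forward every message published on CHANNEL to this worker's clients.
        pubsub = redis_client.pubsub()
        await pubsub.subscribe(CHANNEL)
        async for message in pubsub.listen():
            if message["type"] == "message":
                text = message["data"].decode()
                for ws in list(local_clients):
                    await ws.send_text(text)

    async def chat_endpoint(websocket: WebSocket):
        await websocket.accept()
        local_clients.add(websocket)
        try:
            while True:
                text = await websocket.receive_text()
                # Publish instead of looping over clients directly, so every
                # worker (and host) subscribed to CHANNEL sees the message.
                await redis_client.publish(CHANNEL, text)
        except WebSocketDisconnect:
            pass
        finally:
            local_clients.discard(websocket)

    app = Starlette(routes=[WebSocketRoute("/ws", chat_endpoint)])

    @app.on_event("startup")
    async def start_listener():
        asyncio.create_task(redis_listener())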

@manatlan
Author

Looks great! Thanks for this advice! (But it could be overkill for simple needs, no?)
But I like the concept.

@tomchristie
Member

For single-host deployments we could provide a shared-memory broadcast backend.
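
For reference, a minimal sketch of what such an in-process backend could look like. It only covers a single worker (it does not cross process boundaries), and the InMemoryBroadcast class is hypothetical, not an existing Starlette or uvicorn API:

    import asyncio
    from collections import defaultdict

    class InMemoryBroadcast:
        """Fan published messages out to every subscriber queue on a channel."""

        def __init__(self):
            self._subscribers = defaultdict(set)   # channel name -> set of asyncio.Queue

        def subscribe(self, channel):
            # One queue per WebSocket connection.
            queue = asyncio.Queue()
            self._subscribers[channel].add(queue)
            return queue

        def unsubscribe(self, channel, queue):
            self._subscribers[channel].discard(queue)

        async def publish(self, channel, message):
            for queue in self._subscribers[channel]:
                queue.put_nowait(message)

    # Usage inside a WebSocket endpoint (single worker):
    #   queue = broadcast.subscribe("chat")     # on accept
    #   await broadcast.publish("chat", text)   # from the receive loop
    #   text = await queue.get()                # from the send loop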

@tomchristie
Member

You still need to deal with restarting processes, and you also want to be able to expand out if needed, so it’s still the right approach to take.

@manatlan
Author

Do you know of a simple example of a "shared-memory broadcast backend"? (a URL?)

@tomchristie
Member

I’d suggest starting with redis pub/sub

@tomchristie
Member

That’ll likely be the easiest thing to integrate.

@manatlan
Author

Thanks a lot, Tom!
Everybody wants me to go to Redis pub/sub... now I'll go ;-)
I will take a look!

@manatlan
Author

Thanks Tom, I ended up with:

    async def loopPubSub():
        # Relay messages from the Redis pub/sub subscription ("events") to this WebSocket.
        while ws.client_state == WebSocketState.CONNECTED:
            message = events.get_message()
            if message and message["type"] == "message":
                await ws.send_text(message["data"].decode())
            await asyncio.sleep(0.001)

    async def loopWS():
        # Publish everything received on this WebSocket to the Redis channel ("r" is the Redis client).
        while ws.client_state == WebSocketState.CONNECTED:
            o = await ws.receive_text()
            r.publish('chan:recept', o)

    t1 = asyncio.ensure_future(loopPubSub())
    t2 = asyncio.ensure_future(loopWS())
    await asyncio.wait([t1, t2])

It works like a charm!

@tomchristie
Member

Wonderful, thanks for the update. :)

@manatlan
Author

manatlan commented Nov 14, 2018

BTW, I really think that uvicorn should provide a redis-like (an in-memory DB)...
Starlette & Starlette apps could use uvicorn's in-memory DB.
Redis is a wonderful tool, but bloated for common needs,
and Starlette needs a tool like that for sharing things between workers.
And the best place for it seems to be in uvicorn, in its main loop.
So if it's in uvicorn, a Starlette app would not need redis and could have no dependencies, just uvicorn.
A "simple thing" like radish (hello @maximdanilchenko) could do the trick!

@tomchristie
Member

Noted. Will consider all this when I get onto #133.

@manatlan
Author

Python comes with the multiprocessing.connection module;
it could be helpful.
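
A rough sketch of how that could work: a tiny single-host broker process built on multiprocessing.connection, where each worker connects as a client, publishes the messages its own WebSockets receive, and forwards whatever the broker sends back to its local connections. BROKER_ADDRESS, AUTHKEY, and run_broker are illustrative names, not an existing API:

    import threading
    import time
    from multiprocessing.connection import Client, Listener, wait

    BROKER_ADDRESS = ("127.0.0.1", 6000)   # hypothetical local-only address
    AUTHKEY = b"change-me"                 # hypothetical shared secret

    def run_broker():
        """Broker process: accept worker connections and rebroadcast every message."""
        connections = []
        lock = threading.Lock()

        def accept_loop(listener):
            while True:
                conn = listener.accept()
                with lock:
                    connections.append(conn)

        listener = Listener(BROKER_ADDRESS, authkey=AUTHKEY)
        threading.Thread(target=accept_loop, args=(listener,), daemon=True).start()

        while True:
            with lock:
                conns = list(connections)
            if not conns:
                time.sleep(0.1)            # nothing connected yet
                continue
            for ready in wait(conns, timeout=0.1):
                try:
                    msg = ready.recv()
                except EOFError:           # a worker went away
                    with lock:
                        connections.remove(ready)
                    continue
                with lock:                 # rebroadcast to every connected worker
                    for conn in connections:
                        conn.send(msg)

    # In each worker process, the client side would look roughly like:
    #   conn = Client(BROKER_ADDRESS, authkey=AUTHKEY)
    #   conn.send("text received from a local WebSocket")
    #   text = conn.recv()                 # something another worker published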

@manatlan
Author

manatlan commented Nov 14, 2018

BTW, I have created a POC... my vision of a simple redis-like: redys (asyncio compliant),
just to understand the whole concept behind it...

@ChronoDK

Is a pub/sub system still the recommended way of dealing with the "list of all ws clients" problem in a multi-process application? As long as WebSocket isn't picklable, there would be no way to create a shared cache, right?

@wsdev-co

wsdev-co commented Sep 2, 2022

Is a pub/sub system still the recommended way of dealing with the "list of all ws clients" problem in a multi-process application? As long as WebSocket isn't picklable, there would be no way to create a shared cache, right?

I am still researching ways to create a global cache. Haven't found one yet.
