Tornado like interface for RQ Workers #376

Closed

JohnSundarraj opened this issue Jul 2, 2014 · 2 comments
Comments

@JohnSundarraj

First of all, I would like to thank @nvie for this wonderful open-source project. I've been playing around with combining RQ and Tornado, and I've successfully got RQ working alongside Tornado.

I want to know if it is possible to use global database connection objects in a Worker process. Let me show what I mean in code.

# tornado-init.py
import os

import torndb
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
from tornado.options import options

# The handler classes (HomeHandler, ArchiveHandler, ...) and the mysql_* / port
# options are defined elsewhere, as in the Tornado blog demo.

class Application(tornado.web.Application):
    def __init__(self):
        handlers = [
            (r"/", HomeHandler),
            (r"/archive", ArchiveHandler),
            (r"/feed", FeedHandler),
            (r"/entry/([^/]+)", EntryHandler),
            (r"/compose", ComposeHandler),
            (r"/auth/login", AuthLoginHandler),
            (r"/auth/logout", AuthLogoutHandler),
        ]
        settings = dict(
            blog_title=u"Tornado Blog",
            template_path=os.path.join(os.path.dirname(__file__), "templates"),
            static_path=os.path.join(os.path.dirname(__file__), "static"),
            ui_modules={"Entry": EntryModule},
            xsrf_cookies=True,
            cookie_secret="__TODO:_GENERATE_YOUR_OWN_RANDOM_VALUE_HERE__",
            login_url="/auth/login",
            debug=True,
        )
        tornado.web.Application.__init__(self, handlers, **settings)

        # Have one global connection to the blog DB across all handlers
        self.db = torndb.Connection(
            host=options.mysql_host, database=options.mysql_database,
            user=options.mysql_user, password=options.mysql_password)

def main():
    tornado.options.parse_command_line()
    http_server = tornado.httpserver.HTTPServer(Application())
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()


if __name__ == "__main__":
    main()

From http_server = tornado.httpserver.HTTPServer(Application()) we can see that the Application instance is created once and is global for all handlers. The MySQL connection self.db can then be used within any handler as self.application.db. I don't know how to achieve the same thing in RQ Workers.
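For clarity, a handler reads that shared connection roughly like this (the handler body and query below are just an illustrative sketch, not code from the demo):

# handler-sketch.py (illustrative only; the handler and query are assumptions)
class HomeHandler(tornado.web.RequestHandler):
    def get(self):
        # self.application is the single Application instance, so every request
        # reuses the one torndb connection created in Application.__init__
        entries = self.application.db.query(
            "SELECT * FROM entries ORDER BY published DESC LIMIT 5")
        self.render("home.html", entries=entries)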

Is it possible to use global DB connections when executing jobs with RQ Workers? If so, how do I use the connection object within a classmethod? I'm trying to achieve something like the code below...

# worker-init.py
import os

from DB import Redis, MySQL   # my own wrappers around the Redis and MySQL clients
from rq import Worker, Queue, Connection

listen = ['high', 'default', 'low']

class WorkerDB(object):

    def __init__(self):
        self.MySQL = MySQL()
        self.Redis = Redis()

if __name__ == '__main__':
    with Connection(Redis()):
        worker = Worker(map(Queue, listen))
        # This is the interface I wish I had: pass the shared connections into the worker
        worker.work(WorkerDB())

# MyClass.py
class MyClass(object):
    @classmethod
    def mymethod(cls, WorkerDB):
        # Here I should be able to get the same WorkerDB object that was passed to Worker.work()
        cls.MySQL = WorkerDB.MySQL

If this cannot be done, the Worker creates new connections to Redis and MySQL every time, which is inefficient and can become a bottleneck. I verified this using the command redis-cli info | grep connections.

One thing I could do is use a singleton to overcome this issue, but I don't think that's the right way. I'm eagerly seeking a solution for this.

Thanks in advance. I'm looking forward to someone replying to this issue.

@JohnSundarraj (Author)

I came to know that because of the forking behaviour of RQ, it creates a new connection to the DB every time. So I fixed the issue using Gevent and singleton DB wrappers. Thanks for the gists:
https://gist.github.com/jhorman/e16ed695845fca683057
https://gist.github.com/lechup/d886e89490b2f6c737d7
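For anyone reading later, the singleton-wrapper half of that fix looks roughly like this (a minimal sketch; the class name, connection settings, and torndb usage are my own assumptions, not code from the gists):

# db_singleton.py (sketch only; names and connection settings are assumptions)
import redis
import torndb

class SharedDB(object):
    _redis = None
    _mysql = None

    @classmethod
    def get_redis(cls):
        # Create the Redis connection once per worker process and reuse it
        if cls._redis is None:
            cls._redis = redis.StrictRedis(host='localhost', port=6379)
        return cls._redis

    @classmethod
    def get_mysql(cls):
        # Same idea for MySQL: one lazily created connection per process
        if cls._mysql is None:
            cls._mysql = torndb.Connection(
                host='localhost', database='blog',
                user='blog', password='blog')
        return cls._mysql

Note that with the default forking worker each job still runs in a fresh child process, so lazily created connections like these are only reused when the worker does not fork per job (for example the Gevent worker from the gists above).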

@ccrvlh (Collaborator) commented Feb 2, 2023

For anyone looking at this in the future: the way to handle this now is to customize the worker, or use the SimpleWorker, which doesn't fork.
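A minimal sketch of the SimpleWorker route (the queue names and Redis connection details here are assumptions):

# simple-worker.py (sketch; queue names and connection details are assumptions)
from redis import Redis
from rq import Queue, SimpleWorker

redis_conn = Redis()
queues = [Queue(name, connection=redis_conn) for name in ('high', 'default', 'low')]

# SimpleWorker runs each job in the worker's own process instead of forking,
# so connections opened once at module import time are reused across jobs.
worker = SimpleWorker(queues, connection=redis_conn)
worker.work()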

@ccrvlh closed this as completed Feb 2, 2023