
In development, flask-socketio (4.1.0) with uwsgi is working nicely with just 1 worker and standard initialization.

Now I'm preparing for production and want to make it work with multiple workers.

I've done the following:

Added redis message_queue in init_app:

socketio = SocketIO()
socketio.init_app(app, async_mode='gevent_uwsgi', message_queue=app.config['SOCKETIO_MESSAGE_QUEUE'])

(Sidenote: we are using redis in the app itself as well)
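
For completeness, SOCKETIO_MESSAGE_QUEUE is just a Redis URL; the value below is a placeholder rather than our actual config:

# e.g. in the Flask config -- placeholder host/port/db
SOCKETIO_MESSAGE_QUEUE = 'redis://localhost:6379/0'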

Added gevent monkey patching at the top of the file that we run with uwsgi:

from gevent import monkey
monkey.patch_all()
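
In case the ordering matters: gevent recommends calling patch_all() before any other imports, so that modules binding socket/ssl names at import time get the patched versions. A simplified sketch of how the top of rest.py is laid out (remaining imports omitted):

# rest.py -- patch before anything else is imported
from gevent import monkey
monkey.patch_all()

# only now import the rest, so they pick up the cooperative sockets
from flask import Flask
from flask_socketio import SocketIO
import redis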

Ran uwsgi with:

uwsgi --http 0.0.0.0:63000 --gevent 1000 --http-websockets --master --wsgi-file rest.py --callable application --py-autoreload 1 --gevent-monkey-patch --workers 4 --threads 1

This doesn't seem to work. The client rapidly alternates between establishing a connection and getting 400 Bad Request responses, which I suspect correspond to the 'Invalid session ...' errors I see when I enable SocketIO logging.

Initially it was not using Redis at all:

redis-cli > PUBSUB CHANNELS *

returned an empty result, even with workers=1.

It seemed the following (taken from another SO answer) fixed that:

# https://stackoverflow.com/a/19117266/492148
import gevent
import redis.connection
redis.connection.socket = gevent.socket

After doing so I got a "flask-socketio" pubsub channel with updating data.
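
One way to sanity-check that events really travel through the queue (a rough sketch; the event name and URL are placeholders) is to emit from a separate Python process that only connects to the message queue:

# external_emit.py -- hypothetical test script, not part of the app
from flask_socketio import SocketIO

# no app here: this instance only writes to the message queue;
# the server workers pick the event up and push it to their clients
external_sio = SocketIO(message_queue='redis://localhost:6379/0')  # placeholder URL
external_sio.emit('server_update', {'status': 'ok'})  # made-up event name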

But after switching back to multiple workers, the issue reappeared. Given that swapping the Redis socket did push things in the right direction, I suspect the monkey patching still isn't working properly, yet the code I use matches every example I can find and sits at the very top of the file that uwsgi loads.

Michielvv

2 Answers


Eventually I found https://github.com/miguelgrinberg/Flask-SocketIO/issues/535

So it seems you can't have multiple workers with uwsgi either, as it needs sticky sessions. The documentation mentions this for gunicorn, but I did not interpret that to extend to uwsgi.

Michielvv

You can run as many workers as you like, but only if you run each worker as a standalone single-worker uwsgi process. Once you have all those workers running each on its own port, you can put nginx in front to load balance using sticky sessions. And of course you also need the message queue for the workers to use when coordinating broadcasts.
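
Roughly, that setup looks like this; the ports and paths are just examples, and ip_hash is one simple way to get sticky sessions in nginx:

# one single-worker uwsgi instance per port, all sharing the same Redis message queue
uwsgi --http 127.0.0.1:63001 --gevent 1000 --http-websockets --master --wsgi-file rest.py --callable application --gevent-monkey-patch --workers 1
uwsgi --http 127.0.0.1:63002 --gevent 1000 --http-websockets --master --wsgi-file rest.py --callable application --gevent-monkey-patch --workers 1

# nginx: ip_hash keeps each client on the same backend; the Upgrade headers allow WebSocket
upstream socketio_nodes {
    ip_hash;
    server 127.0.0.1:63001;
    server 127.0.0.1:63002;
}

server {
    listen 80;

    location /socket.io {
        proxy_pass http://socketio_nodes/socket.io;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
    }

    location / {
        proxy_pass http://socketio_nodes;
    }
}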

Miguel
  • Yes, I understand it now. What threw me off was that the documentation for gunicorn explicitly mentions -w 1 and that the load balancing algorithm is too simple to work properly; I erroneously assumed that the absence of such a note for uwsgi meant it would work there. I guess it's not feasible to move all request context to e.g. Redis, so that you could do without sticky sessions? – Michielvv Aug 02 '19 at 11:10
  • This has been asked many times. It's a tricky problem, because at any given time there can be two outstanding requests from a client. Without sticky sessions, these two requests will likely end up on different servers. If the state that is maintained for each client is on redis and needs to be accessed from multiple processes, then it becomes a locking nightmare, with big performance implications. – Miguel Aug 02 '19 at 15:20