If you're planning on putting a Django Celery app into heavy production use, your message queue matters. At SimpleRelevance we used RabbitMQ for months, since it's probably the most famous of the available options.
It took about 8 seconds to restart our celery workers, but that was fine. In general it was fairly reliable, and the way it routed tasks to different celery workers on different boxes often felt like magic.
However, getting the result of a task back using RabbitMQ as the result backend was a different story - it often took minutes, sometimes even hanging the box it was on. And these weren't big results, either.
So, for the record: we switched to Redis. Not only is restarting about 3X faster, but as a result backend it also wins - no more hanging, and results come back as soon as they're ready. Our sysops person also tells me it was much easier to install and configure.
boom.
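For anyone who wants to try the same thing, here's a minimal sketch of what the Django settings looked like at this point. The host, port, and database numbers are placeholders for your own infrastructure, and the setting names are the Celery 3.x / django-celery style ones:

```python
# settings.py -- Redis as both the Celery broker and the result backend.
# Host, port, and database numbers are placeholders; point them at your Redis box.
BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'
```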
----
Update!
Actually, it turns out Redis starts to perform very badly when faced with a deep queue in our production environment. So the optimal setup for us turns out to be RabbitMQ for queue handling, and Redis for the result backend.
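In settings terms, that hybrid setup is roughly the sketch below. The credentials, hosts, and RabbitMQ vhost are placeholders, and again these are the Celery 3.x style setting names:

```python
# settings.py -- RabbitMQ handles the queueing, Redis only stores task results.
# Credentials, hosts, and the vhost are placeholders for your own setup.
BROKER_URL = 'amqp://guest:guest@localhost:5672//'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
```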
When using Redis, were you able to guarantee message delivery with ack emulation in the case of a worker crashing?
Nope - didn't even try!
ReplyDeleteby "worker crashing" are you referring to a worker job failing (ie bug in code, timeout error, etc) or to celery/rabbit crapping out on a deeper level (network error, etc)?