I am pretty new to Celery and Django in general, so please excuse my lack of knowledge. I am trying to run a test that does some calculations and waits for the task to finish, so that I can check it produced the correct answers.
Here is what I…
Here's my setup:
django 1.3
celery 2.2.6
django-celery 2.2.4
djkombu 0.9.2
In my settings.py file I have
BROKER_BACKEND = "djkombu.transport.DatabaseTransport"
i.e. I'm just using the database to queue tasks.
Now on to my problem: I have a…
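In case it helps, the usual trick for this vintage of Celery is eager mode, which makes `.delay()` run the task synchronously in the calling process, so a test can assert on the result immediately. A settings sketch (both names are the pre-3.x setting names):

```python
# Test-only settings (sketch): run tasks inline instead of via the broker.
CELERY_ALWAYS_EAGER = True                  # .delay()/.apply_async() run synchronously
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True   # re-raise task errors inside the test
```

With these set, `result = my_task.delay(...)` followed by `result.get()` returns inside the test without any worker running.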
Is there a way to determine if any task is lost and retry it?
I think a task can be lost because of a dispatcher bug or a worker thread crash.
I was planning to retry them, but I'm not sure how to determine which tasks need to be retried.
And how…
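One low-tech way to find retry candidates (a sketch, not a Celery feature) is to keep your own bookkeeping record per dispatched task and periodically sweep for entries that started long ago but never finished. The record schema below is hypothetical:

```python
def find_stale_tasks(task_records, timeout_seconds, now):
    """Return records that started more than timeout_seconds ago and never
    finished -- candidates for a retry sweep. 'task_records' is a hypothetical
    bookkeeping table (one dict per dispatched task), not a Celery API."""
    return [t for t in task_records
            if not t['finished'] and now - t['started_at'] > timeout_seconds]
```

Celery itself also has `CELERY_ACKS_LATE`, which makes the broker redeliver a task whose worker died before acknowledging it, though that does not cover dispatcher bugs.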
How do I ensure sub-processes are stopped when I stop Supervisord?
I'm using Supervisord to run two Celery workers. The command for each worker is:
command=/usr/local/myapp/src/manage.py celery worker --concurrency=1 --loglevel=INFO
When I start…
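When the managed command spawns its own children (as the prefork worker does), supervisord only signals the parent process by default. The `stopasgroup`/`killasgroup` options (supervisor 3.x) signal the whole process group instead, so the children die with the parent. A config sketch:

```ini
[program:celery_worker]
command=/usr/local/myapp/src/manage.py celery worker --concurrency=1 --loglevel=INFO
stopasgroup=true   ; SIGTERM goes to the whole process group on stop
killasgroup=true   ; SIGKILL the group too if the workers ignore SIGTERM
```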
I have a small script that enqueues tasks for processing. This script makes a whole lot of database queries to get the items that should be enqueued. The issue I'm facing is that the celery workers begin picking up the tasks as soon as it is…
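One way to keep workers from racing the script (a sketch, without knowing the real query code) is to finish all the database queries first and only then start enqueueing; nothing reaches the broker until the work list is complete. Both callables are hypothetical stand-ins (e.g. `enqueue=mytask.delay`):

```python
def enqueue_all(fetch_ids, enqueue):
    """Sketch: run every database query first, then enqueue. Until the
    loop below starts, no worker can pick anything up."""
    ids = list(fetch_ids())   # materialize the full work list up front
    for item_id in ids:       # only now hand work to the broker
        enqueue(item_id)
    return ids
```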
I have a celery chain that runs some tasks. Each of the tasks can fail and be retried. Please see below for a quick example:
from celery import task

@task(ignore_result=True)
def add(x, y, fail=True):
    try:
        if fail:
            raise…
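As an aside, Celery's retry behaviour can be mimicked in plain Python to reason about what the chain does when a link fails and eventually succeeds. This stand-in decorator is an illustration only, not Celery's API:

```python
import functools

def retry(max_retries=3):
    """Stand-in for Celery-style task retries (illustration, not Celery)."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_retries:
                        raise          # retries exhausted: propagate
        return wrapper
    return deco

@retry(max_retries=2)
def flaky(state):
    """Fails twice, then succeeds -- like a task that passes on retry."""
    state['calls'] += 1
    if state['calls'] < 3:
        raise RuntimeError("simulated failure")
    return "ok"
```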
Some of the tasks in my code were taking longer and longer to execute.
Upon inspection I noticed that although I have my worker node set to concurrency 6, and 6 processes exist to 'do work', only 1 task is shown under 'running tasks'. Here is a…
I am using celery on rabbitmq. I have been sending thousands of messages to the queue and they are being processed successfully and everything is working just fine. However, the number of messages in several rabbitmq queues are growing quite large…
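If the growing queues have UUID-like names, they are most likely per-task result queues created by the amqp result backend. The usual mitigations are to stop storing results, or to let results (and their queues) expire. A settings sketch for this era of Celery:

```python
# Settings sketch -- pick whichever fits your needs:
CELERY_IGNORE_RESULT = True          # tasks store no result, so no result queues
CELERY_TASK_RESULT_EXPIRES = 3600    # or: results expire after one hour
```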
I am trying to use django-celery in my project
In settings.py I have
CELERY_RESULT_BACKEND = "amqp"
The server started fine with
python manage.py celeryd --setting=settings
But if I want to access a result from a delayed task, I get the following…
I'm working on a project using Django and Celery (django-celery). Our team decided to wrap all data access code in (app-name)/manager.py (NOT wrapped into Managers the Django way), and to have the code in (app-name)/task.py deal only with assemble…
I am working on a Django project for a racing event in which a table in the database has three fields:
1) a Boolean field to know whether the race is active or not
2) the race start time
3) the race end time
While creating an object of it, the start_time and end_time…
How do you diagnose why manage.py celerybeat won't execute any tasks?
I'm running celerybeat via supervisord with the command:
/usr/local/myapp/src/manage.py celerybeat --schedule=/tmp/celerybeat-schedule-myapp --pidfile=/tmp/celerybeat-myapp.pid…
Why is a model instance I've just created not found when it's queried from a celery task started directly afterwards? For example:
# app.views
model = Model.objects.create() # I create my lovely model in a view
from app.tasks import ModelTask # I…
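The usual explanation is that `.delay()` fires before the view's transaction commits, so the worker queries before the row is visible. Two standard fixes: defer enqueueing until after commit, or pass the primary key and retry the lookup inside the task. The latter is sketched below with hypothetical names (`lookup` would wrap something like `Model.objects.filter(pk=...).first()`, and `wait` could be `time.sleep`):

```python
def fetch_with_retry(lookup, model_id, attempts=3, wait=lambda: None):
    """Sketch: the row may not be visible until the enqueueing transaction
    commits, so retry the lookup a few times before giving up."""
    for _ in range(attempts):
        obj = lookup(model_id)   # returns the object, or None if not visible yet
        if obj is not None:
            return obj
        wait()                   # back off before the next attempt
    raise LookupError("object %s never became visible" % model_id)
```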
I am using the following stack:
Python 3.6
Celery v4.2.1 (Broker: RabbitMQ v3.6.0)
Django v2.0.4.
According to Celery's documentation, running scheduled tasks on different queues should be as easy as defining the corresponding queues for the tasks on…
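For comparison, a Celery 4 beat entry pinned to a queue normally looks like the sketch below (task and queue names are hypothetical). The other half of the contract is that some worker must actually consume that queue, e.g. `celery -A proj worker -Q periodic`:

```python
# Sketch of a beat entry routed to a dedicated queue (Celery 4 style).
beat_schedule = {
    'cleanup-every-hour': {
        'task': 'myapp.tasks.cleanup',       # hypothetical task name
        'schedule': 3600.0,                  # seconds
        'options': {'queue': 'periodic'},    # beat publishes to this queue
    },
}
```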
After updating celery and django-celery to 3.1:
$ pip freeze | grep celery
celery==3.1.18
django-celery==3.1.16
I run into this error when starting my server:
Traceback (most recent call last):
File "app/manage.py", line 16, in
…