Celery task design, idempotency, retries, and Redis as broker.

DevZone Tools · 1,150 copies · Updated Feb 22, 2026 · Django · Python
# CLAUDE.md — Django + Celery Background Jobs

## Setup

- One Celery app per Django project, defined in `<project>/celery.py`. Auto-discover tasks from installed apps.
- Use **Redis** as broker for most workloads; RabbitMQ if you need priority queues, dead-letter handling out of the box, or strict ordering.
- Result backend: only enable if you actually consume results. Otherwise skip — it costs writes.
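A minimal sketch of that wiring, assuming the project package is named `myproject`:

```python
# myproject/celery.py -- "myproject" is a placeholder for your project package
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

app = Celery("myproject")
# Read every CELERY_-prefixed value from Django settings.
app.config_from_object("django.conf:settings", namespace="CELERY")
# Find tasks.py modules in all installed apps.
app.autodiscover_tasks()
```

With Redis as broker this pairs with `CELERY_BROKER_URL = "redis://localhost:6379/0"` in settings; leave `CELERY_RESULT_BACKEND` unset unless you actually read results.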

## Task design

- Tasks live in `<app>/tasks.py`. Decorate with `@shared_task` (not `@app.task`) so apps stay decoupled.
- Tasks are **idempotent**. Retrying must be safe. If it can't be safe, the task is wrong.
- Pass IDs, not objects. `process_invoice(invoice_id)` — never `process_invoice(invoice)`. Pickled model instances go stale.
- Each task fetches what it needs via `Model.objects.get(...)`. If the row is gone, exit cleanly (`Model.DoesNotExist`).
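Put together, a sketch of the pattern (the `Invoice` model and its `processed` flag are hypothetical):

```python
# myapp/tasks.py -- Invoice and its `processed` flag are hypothetical
from celery import shared_task

from myapp.models import Invoice


@shared_task
def process_invoice(invoice_id):
    # Fetch fresh state by ID; never accept a serialized model instance.
    try:
        invoice = Invoice.objects.get(pk=invoice_id)
    except Invoice.DoesNotExist:
        return  # row deleted since the task was queued -- exit cleanly
    if invoice.processed:
        return  # idempotency guard: a retry becomes a no-op
    # ... do the real work here (charge, render, send) ...
    invoice.processed = True
    invoice.save(update_fields=["processed"])
```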

## Retries

- `@shared_task(bind=True, autoretry_for=(SomeError,), retry_backoff=True, max_retries=5)`.
- Don't retry on validation failures — they'll never succeed. Retry on transient infra errors.
- Set `acks_late=True` for tasks where work loss is unacceptable. Combine with `task_reject_on_worker_lost=True`.
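With `retry_backoff=True`, Celery waits roughly `factor * 2**retries` seconds between attempts (with jitter by default, capped by `retry_backoff_max`). A pure-Python sketch of that schedule, jitter disabled:

```python
def backoff_delays(max_retries, factor=1, maximum=600):
    """Delays in seconds between attempts, mirroring Celery's exponential
    backoff with jitter disabled: min(maximum, factor * 2**retries)."""
    return [min(maximum, factor * 2 ** n) for n in range(max_retries)]
```

So `max_retries=5` with the defaults spreads retries over about 31 seconds (1 + 2 + 4 + 8 + 16).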

## Scheduled jobs

- Celery Beat owns the schedule. Define it in `CELERY_BEAT_SCHEDULE` in settings.
- Don't run Beat in multiple places — pick one host and lock it.
- For DB-backed schedules, use `django-celery-beat`. The admin UI is worth it on teams of more than two.
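A settings-level sketch of the schedule (task paths and timings are placeholders):

```python
# settings.py -- task names and timings are placeholders
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    "nightly-report": {
        "task": "reports.tasks.build_nightly_report",
        "schedule": crontab(hour=2, minute=0),  # 02:00 every day
    },
    "heartbeat": {
        "task": "ops.tasks.heartbeat",
        "schedule": 60.0,  # every 60 seconds
    },
}
```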

## Concurrency

- One queue per workload class (`fast`, `slow`, `email`, `report`). A long task should never block a fast queue.
- Set `worker_prefetch_multiplier=1` for slow tasks. With the default of 4, a worker prefetches a batch of tasks that then sit idle behind its long-running job.
- Workers are stateless. Don't share global state across tasks.
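The queue split can be a routing table in settings, with one worker process per queue (queue names below are examples):

```python
# settings.py -- queue names are examples
CELERY_TASK_ROUTES = {
    "reports.tasks.*": {"queue": "report"},
    "mailer.tasks.*": {"queue": "email"},
    # everything else falls through to the default queue ("celery")
}

# One worker per workload class, started separately, e.g.:
#   celery -A myproject worker -Q report -c 2 --prefetch-multiplier=1 -l info
#   celery -A myproject worker -Q email,celery -c 8 -l info
```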

## Observability

- Send failure notifications via Sentry or your error tracker (`celery.signals.task_failure`).
- Tag every log line with `task_id` so you can grep one execution end-to-end.
- Health check the broker — if Redis dies, your jobs silently pile up.
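For the `task_id` tagging, one plain-`logging` approach is a filter that stamps every record; the helper below is hypothetical, not a Celery API (inside a real task you would pass `self.request.id` from a bound task):

```python
import logging


class TaskIdFilter(logging.Filter):
    """Stamp every log record with a task_id (hypothetical helper)."""

    def __init__(self, task_id="-"):
        super().__init__()
        self.task_id = task_id

    def filter(self, record):
        record.task_id = self.task_id
        return True  # keep the record


def make_task_logger(task_id, stream):
    """Build a logger whose every line carries the task_id."""
    logger = logging.getLogger(f"tasks.{task_id}")
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter("[%(task_id)s] %(message)s"))
    logger.addHandler(handler)
    logger.addFilter(TaskIdFilter(task_id))
    logger.setLevel(logging.INFO)
    return logger
```

Grepping the log for one `task_id` then yields the full execution trace.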

## Local dev

- Set `CELERY_TASK_ALWAYS_EAGER = True` in test settings. Tasks run inline in the calling thread.
- For local dev, run `celery -A myproject worker -l info` in a separate terminal. Don't share the prod Redis.
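In a Django settings file for tests, the eager switch is two values:

```python
# test settings override -- run tasks inline, no broker or worker needed
CELERY_TASK_ALWAYS_EAGER = True
CELERY_TASK_EAGER_PROPAGATES = True  # re-raise task exceptions in the test
```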

## Don't

- Don't store large payloads in task arguments. Push to S3/DB and pass the key.
- Don't call tasks synchronously (`task.apply()`) from a request handler when you mean to queue. Use `task.delay()` or `task.apply_async()`.
- Don't put long-running work in HTTP handlers, even if it "usually finishes in time".
- Don't catch broad `Exception` and swallow it. Let Celery retry or fail the task.
