Some operations should not make the user wait. When someone signs up, they need their account created instantly, but the welcome email can be sent a few seconds later. When a file is uploaded, the response should confirm receipt immediately, while processing happens in the background. FastAPI has a built-in mechanism for this, but it has hard limits that AI rarely explains.
How BackgroundTasks works
FastAPI's BackgroundTasks lets you schedule a function to run after the response is sent. The client gets their response immediately, and your function runs in the background within the same process.
```python
from fastapi import FastAPI, BackgroundTasks

app = FastAPI()

def send_welcome_email(email: str, username: str):
    # This runs AFTER the response is sent
    print(f"Sending welcome email to {email}")
    # In reality: call your email service here

@app.post("/register")
def register(
    email: str,
    username: str,
    background_tasks: BackgroundTasks
):
    # 1. Create user account (fast, user waits for this)
    user = create_user(email, username)
    # 2. Schedule email (user does NOT wait for this)
    background_tasks.add_task(send_welcome_email, email, username)
    # 3. Response sent immediately
    return {"message": "Account created", "user_id": user.id}
```

The flow is:
- Request comes in
- Your endpoint runs: creates the user, schedules the background task
- Response is sent to the client
- Then the background task runs
The user sees a fast response. The email sends a moment later. Everyone is happy.
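That sequencing can be observed directly. Here is a minimal, self-contained sketch using FastAPI's TestClient (which runs background tasks as part of delivering the response); the database work is replaced by an in-memory list so the whole thing runs without external services:

```python
from fastapi import BackgroundTasks, FastAPI
from fastapi.testclient import TestClient

app = FastAPI()
events = []

def send_welcome_email(email: str):
    # Runs only after the response has been sent
    events.append(f"emailed {email}")

@app.post("/register")
def register(email: str, background_tasks: BackgroundTasks):
    events.append(f"created account for {email}")
    background_tasks.add_task(send_welcome_email, email)
    return {"message": "Account created"}

client = TestClient(app)
client.post("/register", params={"email": "ada@example.com"})
print(events)  # account creation recorded first, email second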
Adding multiple tasks
You can schedule multiple background tasks from a single endpoint. They run in the order you add them:
```python
@app.post("/orders/{order_id}/confirm")
def confirm_order(
    order_id: int,
    background_tasks: BackgroundTasks
):
    order = get_order(order_id)
    order.status = "confirmed"
    save_order(order)
    # All of these run after the response, in order
    background_tasks.add_task(send_confirmation_email, order)
    background_tasks.add_task(notify_warehouse, order)
    background_tasks.add_task(update_analytics, order_id)
    return {"status": "confirmed"}
```

Background tasks in dependencies
Background tasks also work inside dependencies, which is useful for cross-cutting concerns like audit logging:
```python
from typing import Callable
from fastapi import Depends

def audit_log(background_tasks: BackgroundTasks):
    # This dependency returns a function that schedules a background task
    def log_action(action: str, user_id: int):
        background_tasks.add_task(write_audit_log, action, user_id)
    return log_action

@app.delete("/items/{item_id}")
def delete_item(
    item_id: int,
    log: Callable = Depends(audit_log)
):
    remove_item(item_id)
    log("delete_item", item_id)
    return {"deleted": True}
```

The audit log writes to the database (or a file, or an external service) without slowing down the delete response.
The limitations AI does not mention
Here is where things get important. BackgroundTasks has hard constraints that AI almost never explains:
No persistence
Background tasks live in memory. If your server restarts (a deployment, a crash, an out-of-memory kill), every pending task disappears. There is no retry. There is no record that the task ever existed.
```python
# This task will be LOST if the server restarts
background_tasks.add_task(process_uploaded_video, video_id)

# If it was processing a 2GB video and the server restarted
# at 99%... too bad. No retry. No resume. Gone.
```

No result tracking
You cannot check if a background task succeeded, failed, or is still running. There is no task ID, no status endpoint, no callback.
```python
# You have no way to know if this succeeded
background_tasks.add_task(generate_report, report_id)
# Did it finish? Did it error? You have no idea.
```

No concurrency control
Background tasks run in the same process as your web server. An async task that does CPU-heavy work blocks the event loop directly; a sync task runs in the thread pool but still competes for the same CPU. Either way, every other request slows down.
```python
# This will block your entire server
background_tasks.add_task(resize_1000_images, image_list)
```

Same process, same resources
Your background tasks share memory, CPU, and database connections with your request handlers. A background task that consumes too much memory can crash your entire API.
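One way to see the shared-process constraint concretely: in this sketch (again using the TestClient), the background task reports the same process ID as the request handler, so it shares that process's memory, CPU, and connection pools.

```python
import os
from fastapi import BackgroundTasks, FastAPI
from fastapi.testclient import TestClient

app = FastAPI()
pids = {}

def background_work():
    # No new process, no separate worker: same interpreter as the server
    pids["task"] = os.getpid()

@app.post("/demo")
def demo(background_tasks: BackgroundTasks):
    pids["handler"] = os.getpid()
    background_tasks.add_task(background_work)
    return {"scheduled": True}

client = TestClient(app)
client.post("/demo")
print(pids["task"] == pids["handler"])  # True: one process for everything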
AI defaults to BackgroundTasks because it is simple and built-in. When you ask "how do I process uploads in the background," AI will almost always reach for BackgroundTasks, even when the task is a 10-minute video transcoding job that needs retry logic and progress tracking. Knowing when BackgroundTasks is wrong is more important than knowing how to use it.

When to use BackgroundTasks vs a task queue
This decision matrix is what AI should suggest but rarely does:
| Scenario | BackgroundTasks | Task queue (Celery/ARQ) |
|---|---|---|
| Send a confirmation email | Yes | Overkill |
| Write an audit log entry | Yes | Overkill |
| Generate a thumbnail from an uploaded image | Borderline | Better choice |
| Process a 10GB video upload | No | Yes |
| Generate a complex PDF report (30+ seconds) | No | Yes |
| Rebuild a search index | No | Yes |
| Any task that must not be lost | No | Yes |
| Any task that needs retry on failure | No | Yes |
| Any task that needs progress tracking | No | Yes |
The rule of thumb: if the task takes under 5 seconds and it is acceptable to lose it on server restart, BackgroundTasks is fine. Anything else needs a real task queue.
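If it helps, the rule of thumb collapses to a single predicate. This is purely illustrative: the five-second threshold is this article's heuristic, not anything FastAPI enforces.

```python
def fits_background_tasks(duration_seconds: float, loss_acceptable: bool) -> bool:
    """Rule of thumb: BackgroundTasks only for short tasks that may
    silently vanish on a restart; everything else needs a task queue."""
    return duration_seconds < 5 and loss_acceptable

print(fits_background_tasks(1, True))     # confirmation email: fine
print(fits_background_tasks(600, False))  # video transcode: use a queue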
What a proper task queue looks like
When BackgroundTasks is not enough, you need a dedicated task queue. The most common choices in the Python ecosystem:
| Tool | Best for | Broker |
|---|---|---|
| Celery | Full-featured, battle-tested, complex | Redis or RabbitMQ |
| ARQ | Async-native, lightweight, FastAPI-friendly | Redis |
| Dramatiq | Simpler than Celery, good defaults | Redis or RabbitMQ |
| Huey | Minimal, SQLite support | Redis or SQLite |
Here is what a task queue gives you that BackgroundTasks does not:
```python
# With ARQ (async task queue)

# tasks.py
async def process_video(ctx, video_id: int):
    video = await get_video(video_id)
    await transcode(video)  # Takes 10 minutes
    await update_status(video_id, "ready")

# router
from arq.jobs import Job

@router.post("/videos/upload")
async def upload_video(file: UploadFile, redis=Depends(get_redis)):
    video_id = await save_video(file)
    job = await redis.enqueue_job("process_video", video_id)
    return {"video_id": video_id, "job_id": job.job_id}

@router.get("/jobs/{job_id}/status")
async def job_status(job_id: str, redis=Depends(get_redis)):
    job = Job(job_id, redis)
    status = await job.status()
    return {"status": status}  # queued, in_progress, complete, ...
```

The task is persisted in Redis. If the worker crashes, the task is retried. You can check its status. You can run multiple workers to process tasks in parallel. This is what "production-ready" background processing looks like.
The architecture decision
When reviewing AI-generated code that uses BackgroundTasks, ask yourself:
- What happens if this task fails? If the answer is "nothing important", BackgroundTasks is fine. If the answer is "a user does not get their paid content", you need a task queue.
- How long does this task take? Under 5 seconds and fire-and-forget? BackgroundTasks. Over 5 seconds or needs monitoring? Task queue.
- Does someone need to know the result? No? BackgroundTasks. Yes? Task queue with status tracking.
This is not about the technology; it is about the consequences of failure. AI does not think about consequences. You do.