
In the previous lesson you learned that Depends() can inject anything into an endpoint. The most common thing to inject in a real application is a database session. Getting this dependency right determines whether your app handles 10 concurrent users or 10,000, and getting it wrong is one of the most common bugs in AI-generated FastAPI code.

The get_db pattern

Every FastAPI tutorial and every AI code generator produces some version of this pattern. It is the standard way to manage database connections:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, Session
from fastapi import Depends

DATABASE_URL = "postgresql://user:password@localhost/mydb"

engine = create_engine(DATABASE_URL)
SessionLocal = sessionmaker(bind=engine, autocommit=False, autoflush=False)

def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

Three things happen here:

  1. db = SessionLocal(): a new database session is created (a connection is pulled from the pool).
  2. yield db: the session is handed to the endpoint. The function pauses here while the endpoint runs.
  3. db.close(): after the endpoint finishes (or raises an exception), the session is closed and the connection returns to the pool.

The yield keyword is what makes this a generator-based dependency. FastAPI treats it specially: everything before yield is setup, and everything after is teardown. The finally block guarantees cleanup even if the endpoint crashes.
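
As a rough mental model (plain Python, not FastAPI's actual internals), you can drive such a generator by hand and watch the setup/teardown ordering:

```python
events = []

def get_resource():
    events.append("setup")         # everything before yield: setup
    try:
        yield "resource"
    finally:
        events.append("teardown")  # everything after: guaranteed teardown

# Roughly what the framework does around each request:
gen = get_resource()
resource = next(gen)   # run setup, receive the yielded value
try:
    pass               # the endpoint body would run here
finally:
    gen.close()        # resume the generator; the finally block fires

print(events)  # ['setup', 'teardown']
```

FastAPI wraps this dance in its own machinery, but the takeaway is the same: the finally block runs no matter how the endpoint exits.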

@app.get("/users")
def list_users(db: Session = Depends(get_db)):
    users = db.query(User).all()
    return users
# After this function returns, get_db resumes and closes the session

Why yield matters

To understand why yield is essential, consider what happens without it:

# WRONG: no yield, no cleanup
def get_db_broken():
    db = SessionLocal()
    return db
    # db.close() is never called

This works in development. Your endpoint gets a session, runs queries, and returns a response. But the session is never closed. The database connection sits in limbo until Python's garbage collector eventually destroys the db object, which might take seconds, minutes, or never happen if there are lingering references.

Under load, this kills your app. Each request opens a new connection. The connection pool fills up. New requests block, waiting for a connection. After the pool timeout expires, you get:

sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10 reached,
connection timed out, timeout 30

Your API returns 500 errors. Users see timeouts. And the root cause is a single missing yield.
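
You can reproduce the failure mode without a database. The toy pool below (a simplification, not SQLAlchemy's real QueuePool) is drained by return-style requests that never check their connection back in:

```python
class ToyPool:
    """A stand-in for a connection pool with a hard capacity."""
    def __init__(self, size):
        self.available = size

    def checkout(self):
        if self.available == 0:
            raise TimeoutError("pool exhausted")
        self.available -= 1

    def checkin(self):
        self.available += 1

pool = ToyPool(size=5)

def broken_request():
    pool.checkout()        # return-style dependency: never checked back in

def correct_request():
    pool.checkout()
    try:
        pass               # endpoint work happens here
    finally:
        pool.checkin()     # yield-style dependency: always returned

for _ in range(100):
    correct_request()      # any number of well-behaved requests is fine

for _ in range(5):
    broken_request()       # five leaks empty the pool

try:
    broken_request()       # the sixth request has nothing left to take
except TimeoutError as exc:
    print(exc)             # pool exhausted
```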

AI pitfall
This is one of the most common bugs in AI-generated FastAPI code. AI frequently generates get_db functions that use return instead of yield. The code passes every test (tests use few connections), passes code review (it looks correct at a glance), and only fails in production under real traffic. Always check for yield and finally: db.close().

Connection pooling

The create_engine function manages a connection pool, a set of reusable database connections. Understanding pool settings is essential for production apps.

engine = create_engine(
    DATABASE_URL,
    pool_size=5,         # Keep 5 connections open permanently
    max_overflow=10,     # Allow up to 10 extra connections during spikes
    pool_timeout=30,     # Wait 30 seconds for a connection before erroring
    pool_recycle=1800,   # Recycle connections after 30 minutes (prevents stale connections)
    pool_pre_ping=True   # Check if connection is alive before using it
)
Setting          Default      What it controls
pool_size        5            Permanent connections kept open
max_overflow     10           Extra connections allowed during spikes
pool_timeout     30           Seconds to wait before TimeoutError
pool_recycle     -1 (never)   Seconds before a connection is recycled
pool_pre_ping    False        Health-check connections before use

The total maximum connections your app can open is pool_size + max_overflow. With the defaults, that is 15 connections. If your API serves 100 requests per second and each holds a connection for 200 ms, you need roughly 20 connections on average. If your pool ceiling is under that number, requests queue up.
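
The sizing arithmetic is Little's law: average connections in use ≈ requests per second × seconds each request holds a connection. A quick back-of-envelope check (the numbers are the ones from the paragraph above, not measurements):

```python
requests_per_second = 100
hold_seconds = 0.2                 # each request holds a connection 200 ms

# Little's law: average connections in use = arrival rate x hold time
connections_in_use = requests_per_second * hold_seconds
print(connections_in_use)          # 20.0

pool_ceiling = 5 + 10              # default pool_size + max_overflow
print(connections_in_use > pool_ceiling)  # True: requests will queue
```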

Good to know
pool_pre_ping=True adds a tiny overhead (one extra query per connection checkout) but prevents a nasty class of errors where the database server restarts and your app tries to use dead connections. Always enable it in production.

The engine-per-request mistake

Another pattern AI sometimes generates that causes silent performance problems:

# WRONG: creates a new engine (and pool) on every request
def get_db():
    engine = create_engine(DATABASE_URL)
    SessionLocal = sessionmaker(bind=engine)
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

This code creates a brand new connection pool for every single request, completely defeating the purpose of pooling. Each request opens a fresh TCP connection to the database, does its query, and then the pool (with its single used connection) gets garbage collected.

The engine should be created once at module level, not inside the dependency function. The dependency only creates sessions; the engine and session factory are shared across all requests.

# CORRECT: engine created once, sessions created per request
engine = create_engine(DATABASE_URL, pool_pre_ping=True)
SessionLocal = sessionmaker(bind=engine)

def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

Async database sessions

If your FastAPI app uses async def endpoints, use async database sessions so queries do not block the event loop. The pattern is nearly identical but uses AsyncSession and create_async_engine:

from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from sqlalchemy.orm import sessionmaker

# Note: async drivers use different URL schemes
ASYNC_DATABASE_URL = "postgresql+asyncpg://user:password@localhost/mydb"

async_engine = create_async_engine(ASYNC_DATABASE_URL, pool_pre_ping=True)
AsyncSessionLocal = sessionmaker(
    bind=async_engine, class_=AsyncSession, expire_on_commit=False
)

async def get_db():
    async with AsyncSessionLocal() as db:
        yield db

The async with statement handles both opening and closing. When the endpoint finishes, the async with block exits and the session is properly closed. This is cleaner than the manual try/finally pattern, but both achieve the same goal.
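
The open/close ordering can be traced with a stand-in session. FakeAsyncSession here is a hypothetical stub (not SQLAlchemy); it only mimics the async context-manager protocol the real AsyncSession implements:

```python
import asyncio

class FakeAsyncSession:
    def __init__(self, log):
        self.log = log

    async def __aenter__(self):
        self.log.append("open")      # session opened on entry
        return self

    async def __aexit__(self, *exc):
        self.log.append("close")     # session closed on exit, success or not
        return False

async def get_db(log):
    async with FakeAsyncSession(log) as db:
        yield db

async def main():
    log = []
    agen = get_db(log)
    db = await agen.__anext__()      # dependency runs up to yield
    log.append("endpoint ran")       # the endpoint body would run here
    await agen.aclose()              # teardown: the async with block exits
    return log

print(asyncio.run(main()))  # ['open', 'endpoint ran', 'close']
```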

AI pitfall
AI sometimes mixes sync engines with async endpoints or vice versa. If your endpoint is async def, your database session should also be async. Using a sync session inside an async def endpoint blocks the event loop: every other request waits while your query runs.
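
The blocking effect is easy to measure with toy handlers (pure asyncio, time.sleep standing in for a sync query and asyncio.sleep for an awaited one):

```python
import asyncio
import time

async def blocking_handler():
    time.sleep(0.2)                  # sync call: stalls the whole event loop

async def async_handler():
    await asyncio.sleep(0.2)         # awaited call: other tasks keep running

async def timed(handler):
    start = time.perf_counter()
    await asyncio.gather(*(handler() for _ in range(5)))
    return time.perf_counter() - start

blocking_total = asyncio.run(timed(blocking_handler))
async_total = asyncio.run(timed(async_handler))
print(f"5 blocking handlers: {blocking_total:.2f}s")  # ~1.00s, serialized
print(f"5 async handlers:    {async_total:.2f}s")     # ~0.20s, concurrent
```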

Transactions in dependencies

The get_db pattern gives each endpoint a session, but it does not handle transactions explicitly. You have two options:

# Option 1: explicit commits in the endpoint
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@app.post("/users")
def create_user(user: UserCreate, db: Session = Depends(get_db)):
    new_user = User(**user.dict())
    db.add(new_user)
    db.commit()       # Explicit commit
    db.refresh(new_user)
    return new_user

# Option 2: commit in the dependency (commit on success, rollback on error)
def get_db():
    db = SessionLocal()
    try:
        yield db
        db.commit()       # Commit if endpoint succeeded
    except Exception:
        db.rollback()     # Rollback if endpoint raised
        raise
    finally:
        db.close()

Option 1 is more explicit: you see exactly where commits happen. Option 2 is more automatic: every successful request commits, every failed one rolls back. Most production apps use Option 1 because it gives endpoints fine-grained control over when to commit.
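
To convince yourself of Option 2's flow, you can drive the generator by hand with a stand-in session (FakeSession is a hypothetical stub; the dependency takes the session as a parameter here purely so it can be traced):

```python
class FakeSession:
    def __init__(self):
        self.log = []
    def commit(self):
        self.log.append("commit")
    def rollback(self):
        self.log.append("rollback")
    def close(self):
        self.log.append("close")

def get_db(db):
    try:
        yield db
        db.commit()            # endpoint finished cleanly
    except Exception:
        db.rollback()          # endpoint raised
        raise
    finally:
        db.close()

# Successful endpoint: generator resumed normally
ok = FakeSession()
gen = get_db(ok)
next(gen)
next(gen, None)                # resume past yield; StopIteration swallowed
print(ok.log)                  # ['commit', 'close']

# Failing endpoint: the exception is thrown into the generator,
# which is what FastAPI does when the endpoint raises
bad = FakeSession()
gen = get_db(bad)
next(gen)
try:
    gen.throw(ValueError("endpoint raised"))
except ValueError:
    pass
print(bad.log)                 # ['rollback', 'close']
```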


Quick reference: common mistakes

Mistake                            Symptom                                     Fix
return instead of yield            Connection pool exhaustion under load       Use yield + finally: db.close()
Engine created inside dependency   Slow queries, high DB connection count      Create engine at module level
Sync session in async endpoint     Event loop blocking, slow responses         Use AsyncSession with create_async_engine
Missing pool_pre_ping              Random OperationalError after DB restarts   Set pool_pre_ping=True
No finally block                   Connections leak on endpoint errors         Always wrap yield in try/finally