Course: Node.js & Express

"It works on my machine" is a real problem in production environments. DockerWhat is docker?A tool that packages your application and all its dependencies into a portable container that runs identically on any machine. solves it by bundling your app, its Node.js version, and all its dependencies into a single image that runs identically on your laptop, a CI server, or a cloud providerWhat is provider?A wrapper component that makes data available to all components nested inside it without passing props manually.. This lesson covers everything you need to go from code to a running, production-ready containerWhat is container?A lightweight, portable package that bundles your application code with all its dependencies so it runs identically on any machine..

01. Writing a production Dockerfile

A naive Dockerfile copies everything into one layer and runs as root. The production version uses multi-stage builds to keep the final image small, and runs as an unprivileged user for security.

```dockerfile
# Stage 1: Install dependencies
FROM node:20-alpine AS builder

WORKDIR /app

COPY package*.json ./
# --omit=dev is the modern replacement for the deprecated --only=production
RUN npm ci --omit=dev

# Stage 2: Production image
FROM node:20-alpine

# Create a non-root user and group
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nodejs -u 1001

WORKDIR /app

# Copy only what we need from the build stage
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --chown=nodejs:nodejs . .

USER nodejs

EXPOSE 3000

# Health check - Docker will mark the container unhealthy if this fails
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node healthcheck.js

CMD ["node", "index.js"]
```
Use `node:20-alpine` instead of `node:20`. The Alpine Linux base image is around 50MB vs 900MB for the full Debian image. Your CI pipelines will thank you when they're not downloading a gigabyte on every build.

Why multi-stage builds matter

| Approach | Image size | Dev tools in production |
| --- | --- | --- |
| Single stage (naive) | 900MB+ | Yes (risky) |
| Single stage (alpine) | ~200MB | Yes |
| Multi-stage (alpine) | ~100MB | No (correct) |

The builder stage can install compilers, test frameworks, and other dev tools freely. None of that ends up in your final image, because the second `FROM` starts fresh and `COPY --from=builder` brings over only the artifacts you need.
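One caveat with `COPY . .`: without a `.dockerignore` file, it also copies your host's `node_modules` (clobbering the freshly installed ones from the builder stage) and your `.git` history into the image. A minimal sketch:

```
# .dockerignore
node_modules
.git
.env
npm-debug.log
Dockerfile
docker-compose.yml
```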

02. Docker Compose for local development

In production you'll likely use Kubernetes or a managed container service, but Docker Compose is invaluable for spinning up your full stack locally with one command.

```yaml
# docker-compose.yml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    restart: unless-stopped

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    restart: unless-stopped

volumes:
  postgres_data:
```

Run `docker compose up` to start everything. Run `docker compose down -v` to tear it all down, including the database volume.
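Note that `depends_on` only waits for the `db` container to start, not for Postgres to actually accept connections, so the app can still crash on first boot. A common fix is to gate startup on a health check; here is a sketch using the `pg_isready` binary that ships in the postgres image:

```yaml
services:
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5

  app:
    depends_on:
      db:
        condition: service_healthy
```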

03. Graceful shutdown

When your container receives a termination signal (SIGTERM), it has a window of time to finish what it's doing before Docker force-kills it. If you ignore that signal, requests in flight get cut off mid-response. Graceful shutdown means you stop accepting new connections, let existing ones finish, then exit cleanly.

```javascript
// index.js
import app from './app.js';
import { closeDatabase } from './db.js';

const server = app.listen(process.env.PORT || 3000, () => {
  console.log('Server started');
});

const gracefulShutdown = async (signal) => {
  console.log(`${signal} received, shutting down gracefully`);

  server.close(async () => {
    console.log('HTTP server closed');
    await closeDatabase();
    console.log('Database connection closed');
    process.exit(0);
  });

  // Force exit if graceful shutdown takes too long
  setTimeout(() => {
    console.error('Forced shutdown after timeout');
    process.exit(1);
  }, 10000);
};

process.on('SIGTERM', () => gracefulShutdown('SIGTERM'));
process.on('SIGINT', () => gracefulShutdown('SIGINT'));
```
Kubernetes sends SIGTERM and then waits 30 seconds (by default) before sending SIGKILL. Your graceful shutdown window is whatever `terminationGracePeriodSeconds` is configured to, minus a small buffer. 10 seconds is a safe target for most APIs.
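One subtlety with the code above: `server.close()` stops new connections and waits for in-flight requests, but idle keep-alive sockets can hold the server open until the 10-second force-exit fires. On Node 18.2+ the core `http` server offers `closeIdleConnections()` to end those immediately; a minimal self-contained sketch:

```javascript
import http from 'node:http';

// A server that shuts down promptly even with keep-alive clients.
const server = http.createServer((req, res) => res.end('ok'));

server.listen(0, () => {
  // Stop accepting new connections and wait for active requests...
  server.close(() => console.log('closed'));
  // ...but end idle keep-alive sockets now instead of waiting for
  // them to time out (Node 18.2+).
  server.closeIdleConnections();
});
```

In the lesson's `gracefulShutdown`, you would call `server.closeIdleConnections()` right after `server.close(...)`.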
04. Quick reference

| Concept | Command |
| --- | --- |
| Build image | `docker build -t my-app .` |
| Run container | `docker run -p 3000:3000 my-app` |
| Start full stack | `docker compose up` |
| Stop and remove | `docker compose down` |
| View logs | `docker compose logs -f app` |
| Shell into container | `docker compose exec app sh` |