
Docker Multi-Stage Builds for Django: Production-Ready Images

Build lean, secure, production-ready Django Docker images. Multi-stage builds, dependency caching, non-root users, compiled static files, and health checks that shrink images from 1.2GB to 150MB.

DjangoZen Team Apr 17, 2026

A "works on my machine" Dockerfile and a production-grade one are worlds apart. Here's how to build Django images that are small, fast, secure, and actually run in production.

The Bad Dockerfile

You've seen this:

FROM python:3.12
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Problems:

  • 1.2GB image (full Debian + dev tools)
  • Running as root
  • No build caching — every source change rebuilds everything
  • runserver is not for production
  • No static file collection
  • No health checks
  • .git, .env, node_modules leak into the image

Multi-Stage Build: The Good Dockerfile

# syntax=docker/dockerfile:1.6

# --- Stage 1: Build dependencies ---
FROM python:3.12-slim AS builder

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

# Install build deps
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /build

# Install Python deps into a venv
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

COPY requirements.txt .
RUN pip install -r requirements.txt

# --- Stage 2: Final runtime image ---
FROM python:3.12-slim AS runtime

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PATH="/opt/venv/bin:$PATH" \
    DJANGO_SETTINGS_MODULE=config.settings.production

# Runtime deps only (no build tools)
RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq5 \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy venv from builder
COPY --from=builder /opt/venv /opt/venv

# Create non-root user
RUN groupadd -r django && useradd -r -g django django

WORKDIR /app
COPY --chown=django:django . .

# Collect static files (requires DJANGO_SETTINGS_MODULE; if your settings
# insist on a SECRET_KEY, supply a dummy value at build time)
RUN python manage.py collectstatic --noinput

# Switch to non-root
USER django

EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health/ || exit 1

CMD ["gunicorn", "config.wsgi:application", \
     "--bind", "0.0.0.0:8000", \
     "--workers", "3", \
     "--access-logfile", "-", \
     "--error-logfile", "-"]

Result: ~150MB image, runs as non-root, proper caching, production-grade server.
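
The HEALTHCHECK above assumes a /health/ view exists. Its logic is simple enough to sketch framework-agnostically; in Django you would wrap this in a view returning JsonResponse, and check_db here is a stand-in for something like connection.ensure_connection():

```python
def health(check_db=lambda: True):
    """Return (status_code, body) for a liveness probe.

    check_db is any callable that returns False or raises when the
    database is unreachable.
    """
    try:
        ok = bool(check_db())
    except Exception:
        ok = False
    if ok:
        return 200, {"status": "ok"}
    return 503, {"status": "unhealthy"}
```

curl -f exits non-zero on the 503, which is what flips the container to unhealthy.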

.dockerignore (critical!)

Without this, COPY . . includes your .git, .venv, __pycache__, secrets:

.git
.gitignore
.dockerignore
Dockerfile*
docker-compose*.yml

__pycache__
*.pyc
*.pyo
*.pyd
.Python
.venv
venv
env

node_modules
npm-debug.log
yarn-error.log

.env
.env.*
!.env.example

*.sqlite3
*.log
.pytest_cache
.coverage
htmlcov
.tox

README.md
docs/
tests/
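
The way these patterns combine is worth internalizing: later patterns win, and ! re-includes. A simplified model (real .dockerignore matching uses Go's filepath.Match plus **, not Python's fnmatch, so treat this as an approximation):

```python
from fnmatch import fnmatch

def excluded(path, patterns):
    """Simplified .dockerignore: last matching pattern wins, '!' re-includes."""
    result = False
    for pat in patterns:
        negated = pat.startswith("!")
        if fnmatch(path, pat.lstrip("!")):
            result = not negated
    return result

patterns = [".env", ".env.*", "!.env.example"]
# .env.local is excluded, but .env.example survives via the negation.
```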

Dependency Caching

Docker caches layers. Structure your Dockerfile to maximize cache hits:

# Good: deps installed before code copy — code changes don't bust pip cache
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

Use --mount=type=cache for even better caching:

# syntax=docker/dockerfile:1.6
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
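
Why does copying requirements.txt first help? A layer's cache key depends on everything before it plus its own inputs. A toy model of that chaining (not Docker's actual algorithm):

```python
import hashlib

def layer_key(parent, instruction, content=b""):
    """Toy cache key: hash of the parent key, the instruction text, and
    any copied file contents."""
    h = hashlib.sha256()
    h.update(parent.encode())
    h.update(instruction.encode())
    h.update(content)
    return h.hexdigest()

base = layer_key("", "FROM python:3.12-slim")
reqs = layer_key(base, "COPY requirements.txt .", b"django==5.0\n")
deps = layer_key(reqs, "RUN pip install -r requirements.txt")
# Editing app code only changes the later `COPY . .` key, so `deps`
# stays cached; editing requirements.txt changes `reqs` and busts it.
```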

Docker Compose for Development

# docker-compose.yml
services:
  web:
    build:
      context: .
      target: builder  # use builder stage for dev
    command: python manage.py runserver 0.0.0.0:8000
    working_dir: /app  # builder stage's WORKDIR is /build; run from the mounted code
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    env_file: .env
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started

  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: dev
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myapp"]
      interval: 10s

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

  celery:
    build: .
    command: celery -A config worker -l info
    env_file: .env
    depends_on:
      - redis

volumes:
  postgres_data:
  redis_data:
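
Inside this compose network, Django reaches Postgres and Redis by service name. A hedged sketch of the matching settings (the POSTGRES_* variables mirror the compose file; DB_HOST, DB_PORT, and REDIS_URL are naming assumptions):

```python
import os

# Hostnames are the compose service names ("db", "redis").
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "myapp"),
        "USER": os.environ.get("POSTGRES_USER", "myapp"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", "dev"),
        "HOST": os.environ.get("DB_HOST", "db"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": os.environ.get("REDIS_URL", "redis://redis:6379/1"),
    }
}
```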

Production Compose

# docker-compose.prod.yml (layered over the base file so db/redis stay defined):
#   docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
services:
  web:
    image: myapp:${VERSION:-latest}
    restart: unless-stopped
    ports:
      - "127.0.0.1:8000:8000"
    env_file: .env.prod
    volumes:
      - static:/app/staticfiles  # shared with nginx, which serves the collected files
    depends_on:
      - db
      - redis

  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certs:/etc/nginx/certs:ro
      - static:/app/staticfiles:ro
    depends_on:
      - web

volumes:
  static:

Secrets Management

Don't bake secrets into images. Use:

  • Docker secrets (Swarm mode)
  • Environment files with env_file:
  • BuildKit secrets for build-time secrets:

RUN --mount=type=secret,id=pip_conf,target=/root/.pip/pip.conf \
    pip install -r requirements.txt

docker build --secret id=pip_conf,src=~/.pip/pip.conf .
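
At runtime, Docker secrets surface as files under /run/secrets/<name>. A small helper that prefers the secret file and falls back to an environment variable (the helper and the upper-cased env-var convention are assumptions, not a Docker API):

```python
import os
from pathlib import Path

def read_secret(name, default=None):
    """Read a Docker secret file if present, else fall back to the env var."""
    secret_file = Path("/run/secrets") / name
    if secret_file.is_file():
        return secret_file.read_text().strip()
    return os.environ.get(name.upper(), default)
```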

Image Size Breakdown

Approach             Size    Pros               Cons
python:3.12          1.2GB   Most compatible    Huge
python:3.12-slim     150MB   Good balance       Need to install some libs
python:3.12-alpine   80MB    Tiny               Compilation issues with some wheels
distroless           60MB    Most secure        No shell, hard to debug

Recommendation: Start with slim. Only use alpine if your dependencies ship musl wheels or compile cleanly against musl.

Security Hardening

# Drop all capabilities
# (set in docker-compose / k8s, not Dockerfile)

# Scan image for vulnerabilities
# docker scout cves myapp:latest

# Use specific digest instead of tag
FROM python:3.12.4-slim-bookworm@sha256:abc123...

# Non-root user (shown above)
USER django

# Read-only filesystem where possible
# docker run --read-only --tmpfs /tmp

CI/CD: Build and Push

# .github/workflows/build.yml
name: Build and Push
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ghcr.io/${{ github.repository }}:${{ github.sha }}
            ghcr.io/${{ github.repository }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max

Running Migrations

Don't put python manage.py migrate in your Dockerfile. Run it separately:

# Before starting new containers
docker-compose run --rm web python manage.py migrate

# Or as an init container / Job in Kubernetes
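
When migrating against a freshly started stack, Postgres may not be accepting connections yet. A stdlib wait loop you can call from a deploy or CI script before migrate (the function name is illustrative; in dev, the compose healthcheck already covers this):

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Poll until a TCP port accepts connections; return False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

Call wait_for_port("db", 5432) before invoking manage.py migrate.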

Zero-Downtime Deployment

  1. Build the new image and push it to the registry
  2. docker-compose up -d --no-deps web recreates the web container with the new image (for a true rolling update, run two replicas and replace them one at a time, or use an orchestrator)
  3. Gunicorn's --graceful-timeout 30 lets in-flight requests finish before old workers exit

Monitoring

Export metrics:

# pip install django-prometheus
INSTALLED_APPS = [..., 'django_prometheus']
MIDDLEWARE = [
    'django_prometheus.middleware.PrometheusBeforeMiddleware',
    ...,
    'django_prometheus.middleware.PrometheusAfterMiddleware',
]

Expose /metrics and scrape with Prometheus + Grafana.

Summary

A production Django Docker image should be:

  • Multi-stage built (small runtime image)
  • Running as non-root
  • Health-checked
  • Cache-optimized for fast rebuilds
  • Scanned for vulnerabilities
  • Versioned (never deploy :latest)
  • Served by Gunicorn/uWSGI behind nginx

Get these right and deployments become boring — which is exactly what you want in production.