Build lean, secure, production-ready Django Docker images. Multi-stage builds, dependency caching, non-root users, compiled static files, and health checks that shrink images from 1.2GB to 150MB.
A "works on my machine" Dockerfile and a production-grade one are worlds apart. Here's how to build Django images that are small, fast, secure, and actually run in production.
You've seen this:
```dockerfile
FROM python:3.12
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```
Problems:
- 1.2GB image (full Debian + dev tools)
- Running as root
- No build caching — every source change rebuilds everything
- runserver is not for production
- No static file collection
- No health checks
- .git, .env, node_modules leak into the image
A production-grade multi-stage version:

```dockerfile
# syntax=docker/dockerfile:1.6

# --- Stage 1: Build dependencies ---
FROM python:3.12-slim AS builder

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

# Install build deps
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /build

# Install Python deps into a venv
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt .
RUN pip install -r requirements.txt

# --- Stage 2: Final runtime image ---
FROM python:3.12-slim AS runtime

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PATH="/opt/venv/bin:$PATH" \
    DJANGO_SETTINGS_MODULE=config.settings.production

# Runtime deps only (no build tools)
RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq5 \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy venv from builder
COPY --from=builder /opt/venv /opt/venv

# Create non-root user
RUN groupadd -r django && useradd -r -g django django

WORKDIR /app
COPY --chown=django:django . .

# Collect static files
RUN python manage.py collectstatic --noinput

# Switch to non-root
USER django
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/health/ || exit 1

CMD ["gunicorn", "config.wsgi:application", \
     "--bind", "0.0.0.0:8000", \
     "--workers", "3", \
     "--access-logfile", "-", \
     "--error-logfile", "-"]
```
Result: ~150MB image, runs as non-root, proper caching, production-grade server.
Without a `.dockerignore`, `COPY . .` includes your `.git`, `.venv`, `__pycache__`, and secrets:

```
.git
.gitignore
.dockerignore
Dockerfile*
docker-compose*.yml
__pycache__
*.pyc
*.pyo
*.pyd
.Python
.venv
venv
env
node_modules
npm-debug.log
yarn-error.log
.env
.env.*
!.env.example
*.sqlite3
*.log
.pytest_cache
.coverage
htmlcov
.tox
README.md
docs/
tests/
```
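If you want to sanity-check which files the patterns above would exclude, here is a rough approximation of the matching in Python. Real `.dockerignore` semantics follow Go's `filepath.Match` plus `**` and `!` negation, so treat this as a sketch, not a faithful reimplementation:

```python
import fnmatch

# Subset of the patterns above; "**" and "!" negations are not handled here.
IGNORED = [".git", ".venv", "__pycache__", "node_modules", "*.pyc", ".env", "*.sqlite3"]

def is_ignored(path: str) -> bool:
    """True if the full path or any path segment matches an ignore pattern."""
    segments = path.split("/")
    return any(
        fnmatch.fnmatch(path, pattern)
        or any(fnmatch.fnmatch(segment, pattern) for segment in segments)
        for pattern in IGNORED
    )

print(is_ignored("app/__pycache__/views.cpython-312.pyc"))  # True
print(is_ignored("manage.py"))                              # False
```

For the authoritative answer, build the image and inspect it (`docker build` output, or `docker run --rm image ls -la /app`).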
Docker caches layers. Structure your Dockerfile to maximize cache hits:
```dockerfile
# Good: deps installed before code copy — code changes don't bust pip cache
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
```
Use --mount=type=cache for even better caching:
```dockerfile
# syntax=docker/dockerfile:1.6
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```
For local development, wire everything together with Compose:

```yaml
# docker-compose.yml
services:
  web:
    build:
      context: .
      target: builder  # use builder stage for dev
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    env_file: .env
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started

  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: dev
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myapp"]
      interval: 10s

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

  celery:
    build: .
    command: celery -A config worker -l info
    env_file: .env
    depends_on:
      - redis

volumes:
  postgres_data:
  redis_data:
```
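`depends_on` with `condition: service_healthy` handles startup ordering, but the app can still race the database after a restart. A small retry loop you might run before `migrate` or `runserver` (the script name and defaults are illustrative):

```python
# wait_for_db.py — block until a TCP port accepts connections, or give up.
import socket
import time

def wait_for(host: str, port: int, timeout: float = 30.0) -> bool:
    """Retry a TCP connect until `timeout` seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True  # port is accepting connections
        except OSError:
            time.sleep(0.5)  # refused or unreachable — back off and retry
    return False
```

Run it as e.g. `python wait_for_db.py && python manage.py migrate` in your container command; a pure TCP check is deliberately dumb but has no dependencies beyond the standard library.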
In production, put nginx in front:

```yaml
# docker-compose.prod.yml
services:
  web:
    image: myapp:${VERSION:-latest}
    restart: unless-stopped
    ports:
      - "127.0.0.1:8000:8000"
    env_file: .env.prod
    depends_on:
      - db
      - redis

  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certs:/etc/nginx/certs:ro
      - static:/app/staticfiles:ro
    depends_on:
      - web

volumes:
  static:
```
Don't bake secrets into images. Use `env_file` for runtime secrets, and BuildKit secret mounts for build-time secrets:

```dockerfile
RUN --mount=type=secret,id=pip_conf,target=/root/.pip/pip.conf \
    pip install -r requirements.txt
```

```shell
docker build --secret id=pip_conf,src=~/.pip/pip.conf .
```
Choosing a base image:

| Approach | Size | Pros | Cons |
|---|---|---|---|
| python:3.12 | 1.2GB | Most compatible | Huge |
| python:3.12-slim | 150MB | Good balance | Need to install some libs |
| python:3.12-alpine | 80MB | Tiny | Compilation issues with some wheels |
| distroless | 60MB | Most secure | No shell, hard to debug |
Recommendation: Start with slim. Only use alpine if binary compatibility isn't an issue.
A few more hardening steps:

```dockerfile
# Drop all capabilities
# (set in docker-compose / k8s, not Dockerfile)

# Scan image for vulnerabilities
# docker scout cves myapp:latest

# Use specific digest instead of tag
FROM python:3.12.4-slim-bookworm@sha256:abc123...

# Non-root user (shown above)
USER django

# Read-only filesystem where possible
# docker run --read-only --tmpfs /tmp
```
Build and push images from CI:

```yaml
# .github/workflows/build.yml
name: Build and Push

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ghcr.io/${{ github.repository }}:${{ github.sha }}
            ghcr.io/${{ github.repository }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
Don't put python manage.py migrate in your Dockerfile. Run it separately:
```shell
# Before starting new containers
docker-compose run --rm web python manage.py migrate

# Or as an init container in Kubernetes
```
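One way to script that ordering is a small release helper: migrate first, roll containers only if it succeeds. The step list is illustrative and the command runner is injectable so the logic is testable without Docker:

```python
# release.py — run migrations, then roll the web container (sketch).
import subprocess

RELEASE_STEPS = [
    ["docker-compose", "run", "--rm", "web",
     "python", "manage.py", "migrate", "--noinput"],
    ["docker-compose", "up", "-d", "--no-deps", "web"],
]

def release(runner=subprocess.run) -> int:
    """Run each step in order; stop at the first non-zero exit code."""
    for cmd in RELEASE_STEPS:
        result = runner(cmd)
        if result.returncode != 0:
            return result.returncode  # don't roll containers if migrate failed
    return 0
```

The important property is the ordering guarantee: new code never serves traffic against an un-migrated schema.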
For near-zero-downtime rollouts:

- `docker-compose up -d --no-deps web` rolls containers one at a time
- Gunicorn's `--graceful-timeout 30` lets in-flight requests finish before a worker is killed

Export metrics:
```python
# pip install django-prometheus
INSTALLED_APPS = [..., 'django_prometheus']

MIDDLEWARE = [
    'django_prometheus.middleware.PrometheusBeforeMiddleware',
    ...,
    'django_prometheus.middleware.PrometheusAfterMiddleware',
]
```
Expose /metrics and scrape with Prometheus + Grafana.
A production Django Docker image should be:
- Multi-stage built (small runtime image)
- Running as non-root
- Health-checked
- Cache-optimized for fast rebuilds
- Scanned for vulnerabilities
- Versioned (never deploy :latest)
- Served by Gunicorn/uWSGI behind nginx
Get these right and deployments become boring — which is exactly what you want in production.