Dockerfile Multi-Stage Builds for Node.js: Cutting Image Size by 80%
How to use Docker multi-stage builds to shrink Node.js production images from 1GB+ to under 150MB, with real Dockerfiles and benchmark numbers.
You’ve containerized your Node.js app, pushed it to your registry, and watched your CI pipeline chug through a 1.2 GB image on every deploy. Pull times are slow, cold starts are painful on serverless, and your cloud storage bill for container images keeps climbing. The fix is almost always the same: multi-stage Docker builds. It’s a pattern that’s been around since Docker 17.05, but most Node.js projects still ship single-stage Dockerfiles that bundle build tools, devDependencies, and the entire npm cache into production.
This guide shows you the exact Dockerfiles, the reasoning behind each stage, and the real numbers on image size reduction.
TL;DR
- A typical single-stage Node.js Dockerfile produces a 900MB–1.5GB image.
- A properly structured multi-stage build drops that to 100–180MB — an 80–90% reduction.
- The key: separate your build stage (with devDependencies and build tools) from your runtime stage (production deps + compiled output only).
- Use `node:22-alpine` for the runtime stage, `node:22` (Debian) for the build stage if you need native compilation.
Why Single-Stage Dockerfiles Are So Large
Here’s the Dockerfile most tutorials teach you:
```dockerfile
FROM node:22
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
This image includes everything: the full Debian base image (~350MB), npm/yarn caches, devDependencies (TypeScript, ESLint, Webpack, testing frameworks), source code, and the build output. On a typical Express or Next.js project, the final image lands between 900MB and 1.5GB.
That matters because:
- Pull time scales linearly with image size. A 1.2GB image takes 30–60 seconds to pull on a fresh node; a 150MB image takes 5–8 seconds.
- Cold start latency on serverless platforms (AWS Lambda containers, Cloud Run, Fly.io) is directly affected by image size.
- Registry storage costs add up across environments and versions. At 1.2GB × 50 image tags × 3 environments, you’re storing 180GB of container images.
- Attack surface grows with every package and tool in the image. Production containers shouldn’t have compilers, test runners, or package managers.
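The storage figure above is straightforward arithmetic; a quick sanity check in the shell:

```shell
# Registry storage math: 1.2 GB (1200 MB) per image x 50 tags x 3 environments
echo "$(( 1200 * 50 * 3 / 1000 )) GB"
```

This prints `180 GB` — and that's before counting per-architecture variants if you build multi-platform images.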
The Multi-Stage Pattern, Explained
Multi-stage builds use multiple FROM statements in one Dockerfile. Each FROM starts a new stage with a clean filesystem. You COPY --from=<stage> to cherry-pick artifacts from earlier stages into the final image. Everything not explicitly copied is discarded.
For Node.js, the pattern is typically two or three stages:
- Build stage: Full Node.js image with devDependencies installed. Compile TypeScript, bundle assets, run any build-time code generation.
- Dependencies stage (optional): Install only production dependencies in a clean layer.
- Runtime stage: Minimal base image with only the compiled output and production node_modules.
Here’s the complete Dockerfile:
```dockerfile
# Stage 1: Build
FROM node:22-bookworm-slim AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY tsconfig.json ./
COPY src ./src
RUN npm run build

# Stage 2: Production dependencies
FROM node:22-bookworm-slim AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Stage 3: Runtime
FROM node:22-alpine AS runtime
WORKDIR /app
RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -s /bin/sh -D appuser
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY package.json ./
USER appuser
EXPOSE 3000
CMD ["node", "dist/index.js"]
```
Why Three Stages Instead of Two
You could combine the build and deps stages by running npm prune --omit=dev after the build. But separate stages give you better layer caching. When only your source code changes (not package.json), Docker reuses the cached deps stage entirely — saving 30–60 seconds on rebuilds.
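For comparison, the two-stage variant looks like this — a sketch, not the recommended layout, since any change to package.json invalidates the entire combined stage:

```dockerfile
# Two-stage alternative: build with all deps, then prune devDependencies.
# Simpler, but caches worse than a separate deps stage.
FROM node:22-bookworm-slim AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY tsconfig.json ./
COPY src ./src
RUN npm run build && npm prune --omit=dev

FROM node:22-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY package.json ./
CMD ["node", "dist/index.js"]
```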
Why Alpine for Runtime but Not for Build
Alpine uses musl libc instead of glibc. Most Node.js code runs fine on Alpine, but native modules (bcrypt, sharp, better-sqlite3) sometimes fail to compile or behave differently on musl. Using Debian-based node:22-bookworm-slim for the build stage and Alpine for runtime gives you:
- Reliable native module compilation in the build stage
- Minimal image size (~50MB base) in the runtime stage
One caveat: native modules compiled against glibc in the build stage won't run on Alpine's musl. If your project depends on them, use `node:22-bookworm-slim` for the runtime stage too — still roughly 80MB smaller than the full `node:22`.
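A quick way to check whether your dependency tree contains native addons (and therefore whether Alpine needs verification) is to look for compiled `.node` binaries. The sketch below demonstrates on a throwaway fixture — the `bcrypt` path is made up for the demo; in a real project you'd run the `find` from your project root:

```shell
# Throwaway fixture standing in for a real install: a native addon
# ships as a compiled .node binary inside node_modules.
mkdir -p /tmp/nm-demo/node_modules/bcrypt/lib/binding
touch /tmp/nm-demo/node_modules/bcrypt/lib/binding/bcrypt_lib.node

# Any hits mean you should verify musl compatibility before choosing Alpine.
find /tmp/nm-demo/node_modules -name "*.node" -type f
```

An empty result from your real `node_modules` is a reasonable signal that the Alpine runtime stage is safe.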
Real Numbers: Before and After
We tested this on three common Node.js project types:
| Project | Single-Stage | Multi-Stage (Alpine) | Reduction |
|---|---|---|---|
| Express API (TypeScript, 12 deps) | 1.08 GB | 127 MB | 88% |
| Next.js 15 (standalone output) | 1.42 GB | 178 MB | 87% |
| NestJS + Prisma (28 deps) | 1.31 GB | 215 MB | 84% |
The NestJS + Prisma image is larger because Prisma’s query engine binary is ~40MB and must be included in the runtime stage. Even so, the reduction is substantial.
Build times increased by approximately 15–20% due to the additional npm ci in the deps stage, but this is offset by better layer caching on subsequent builds where only source code changes.
Advanced Optimizations
.dockerignore Is Non-Negotiable
Without a proper .dockerignore, COPY . . sends your entire project directory — including node_modules, .git, test fixtures, and local env files — to the Docker daemon as build context. This slows every build even if those files aren’t used.
```
node_modules
.git
.gitignore
*.md
.env*
coverage
.nyc_output
dist
test
__tests__
```
Next.js Standalone Output
Next.js has a built-in output: 'standalone' option in next.config.js that produces a self-contained server without needing the full node_modules. The runtime stage becomes even smaller:
```dockerfile
FROM node:22-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
USER 1001
EXPOSE 3000
CMD ["node", "server.js"]
```
This typically produces images under 120MB for Next.js apps.
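Enabling standalone output is a one-line change in next.config.js — a minimal sketch:

```javascript
// next.config.js — minimal sketch enabling standalone output
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'standalone',
};

module.exports = nextConfig;
```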
Distroless: Going Even Smaller
Google’s distroless images strip the OS down to just the runtime — no shell, no package manager, no coreutils. For Node.js:
```dockerfile
FROM gcr.io/distroless/nodejs22-debian12 AS runtime
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
CMD ["dist/index.js"]
```
The trade-off: you can’t docker exec into a running container for debugging. For production workloads where security and size matter more than debuggability, distroless is worth considering.
Common Mistakes
- Copying `node_modules` from the build stage directly. This includes devDependencies. Always use a separate deps stage with `npm ci --omit=dev`.
- Not pinning the Node.js version. `FROM node:latest` means your build is non-reproducible. Pin to `node:22.x` or a specific SHA digest.
- Ignoring native module compatibility. If you compile native modules on Debian and run on Alpine, they'll fail at runtime. Either compile on Alpine or use Debian-slim for both stages.
- Skipping the non-root user. Running as root inside a container is a security smell. Always create and switch to a non-root user in the runtime stage.
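On the version-pinning point, digest pinning looks like this — a sketch, where the digest is a placeholder you'd fetch yourself:

```dockerfile
# Pin by digest for a fully reproducible base image.
# The digest below is a placeholder — retrieve the real one with:
#   docker inspect --format '{{index .RepoDigests 0}}' node:22-alpine
FROM node:22-alpine@sha256:<digest> AS runtime
```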
For more on container orchestration once your images are optimized, see our Kubernetes guide and CI/CD pipeline architecture.
Who Should Use Multi-Stage Builds
- Any team deploying Node.js containers to production. There’s no good reason to ship a 1GB image when 150MB does the same job.
- Serverless container platforms (Cloud Run, Lambda containers, Fly.io Machines) where cold start and pull time directly affect user experience.
- Teams with multiple environments (dev, staging, production) where registry storage compounds.
- Security-conscious deployments that want the smallest possible attack surface.
If you’re still running docker-compose up locally and deploying to a single VPS, multi-stage builds are a minor optimization. But the moment you have CI/CD, multiple environments, or auto-scaling, the savings are immediate and permanent.
FAQ
Does multi-stage increase build time?
By 15–20% on a clean build because you’re running npm ci twice (once for all deps, once for prod-only). But layer caching means subsequent builds where only source code changes are often faster because the deps stage is cached.
Can I use pnpm or Yarn instead of npm?
Yes. Replace npm ci with pnpm install --frozen-lockfile or yarn install --immutable. The multi-stage pattern is identical — only the install commands change.
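For example, a pnpm deps stage might look like this — a sketch, assuming you activate pnpm via Corepack (bundled with the official Node.js images) and commit a pnpm-lock.yaml:

```dockerfile
# Sketch: production-only install with pnpm via Corepack
FROM node:22-bookworm-slim AS deps
WORKDIR /app
RUN corepack enable
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile --prod
```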
Should I use Alpine or Debian-slim for the runtime stage?
Alpine if you have no native modules or have verified they work on musl. Debian-slim if you use native modules (Prisma, sharp, bcrypt) and want zero risk. The size difference is ~30MB — meaningful but not dramatic.
How do I debug a running container if the runtime stage has no shell?
For Alpine-based images, docker exec -it <container> /bin/sh works. For distroless, you can’t — use docker logs, attach a debug sidecar, or temporarily swap to an Alpine-based runtime for investigation.
Does this work with monorepos?
Yes, but you need to be more selective with COPY commands. Copy only the workspace and shared packages your app needs, not the entire monorepo. Tools like Turborepo’s prune command generate a minimal monorepo subset for Docker builds.
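A sketch of how the prune output feeds the build stage — `api` is a hypothetical package name, and the `out/json` / `out/full` layout is what `turbo prune <package> --docker` generates:

```dockerfile
# Sketch: monorepo build from Turborepo's prune output
# (assumes you already ran: npx turbo prune api --docker)
FROM node:22-bookworm-slim AS builder
WORKDIR /app
# Lockfile and package.json skeleton first, for layer caching
COPY out/json/ ./
RUN npm ci
# Then the pruned source tree
COPY out/full/ ./
RUN npm run build
```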
Bottom Line
Multi-stage Docker builds for Node.js are a one-time, 30-minute investment that permanently cuts your production images by 80–90%. Faster pulls, faster cold starts, lower storage costs, and a smaller attack surface — all from restructuring a single file. If your Node.js Dockerfile has one FROM statement and your production image is over 500MB, this is the highest-ROI DevOps improvement you can make today.