Docker Compose for Full-Stack Developers: Next.js, PostgreSQL, and Redis

One command to spin up your entire stack. Docker Compose lets you define your Next.js app, PostgreSQL database, and Redis cache as a single declarative unit. In this tutorial, you will build a reproducible development environment, plus a production-ready configuration, that any team member can start with docker compose up.
What You Will Learn
By the end of this tutorial, you will:
- Set up Docker Compose to orchestrate a multi-service full-stack application
- Containerize a Next.js 15 application with hot reloading in development
- Run PostgreSQL 16 with persistent volumes and automatic initialization
- Add Redis 7 as a caching layer with health checks
- Configure environment variables securely across services
- Write a multi-stage Dockerfile optimized for production builds
- Implement health checks and dependency ordering between services
- Create separate development and production Compose configurations
Prerequisites
Before starting, ensure you have:
- Docker Desktop installed and running (v4.25 or later)
- Node.js 20+ installed locally for initial project setup
- Basic terminal knowledge — navigating directories, running commands
- Familiarity with Next.js — pages, API routes, Server Components
- A code editor — VS Code with the Docker extension recommended
Why Docker Compose for Full-Stack Development?
Every developer has experienced the "works on my machine" problem. Your Next.js app connects to a local PostgreSQL database, uses Redis for caching, and everything runs perfectly — until a teammate clones the repo and spends hours configuring their environment.
Docker Compose solves this by defining your entire application stack in a single docker-compose.yml file. Every service, every connection string, every port mapping — all declared once and reproducible everywhere.
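To make that concrete, here is a minimal sketch of what such a file looks like (two services only, with illustrative values; the full, working file for this project is built in Step 4):

```yaml
# Minimal illustration of the docker-compose.yml shape; not the tutorial's final file
services:
  app:
    build: .            # build the app image from the local Dockerfile
    ports:
      - "3000:3000"     # host:container port mapping
    depends_on:
      - db              # start the database before the app
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: postgres
```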
Here is what makes Docker Compose essential for full-stack teams:
- Consistent environments — development, staging, and production use identical service configurations
- One-command setup — new team members run docker compose up and start coding immediately
- Isolated databases — no conflicts between project databases running on the same machine
- Disposable environments — tear down and rebuild your entire stack in seconds
- CI/CD integration — the same Compose file works in GitHub Actions, GitLab CI, and local development
Project Overview
You will build a task management API with the following architecture:
┌─────────────────────────────────────────────┐
│               Docker Compose                │
│                                             │
│  ┌─────────┐    ┌────────────┐   ┌───────┐  │
│  │ Next.js │───▶│ PostgreSQL │   │ Redis │  │
│  │  :3000  │    │   :5432    │   │ :6379 │  │
│  └─────────┘    └────────────┘   └───────┘  │
│       ▲               ▲              ▲      │
│       └───────── app-network ────────┘      │
└─────────────────────────────────────────────┘
- Next.js 15 — App Router with API routes for the task CRUD API
- PostgreSQL 16 — Primary data store for tasks and users
- Redis 7 — Caching layer for frequently accessed data
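The Redis layer in this architecture follows the cache-aside pattern: read from the cache first, fall back to PostgreSQL, then populate the cache for the next reader. A minimal, self-contained sketch of the pattern (an in-memory Map stands in for Redis here so it runs on its own; the real client wiring comes in Step 7):

```typescript
// Cache-aside sketch: check the cache, fall back to the source of truth,
// then populate the cache. A Map stands in for Redis so the example is
// self-contained; keys and the load callback are illustrative.
const cache = new Map<string, string>();

async function cacheAside<T>(
  key: string,
  load: () => Promise<T>
): Promise<{ data: T; source: "cache" | "database" }> {
  const hit = cache.get(key);
  if (hit !== undefined) {
    // Cache hit: deserialize and return without touching the database
    return { data: JSON.parse(hit) as T, source: "cache" };
  }
  const data = await load();
  cache.set(key, JSON.stringify(data)); // real code would also set a TTL
  return { data, source: "database" };
}
```

The API routes in Step 8 apply exactly this shape, with a TTL so stale entries expire on their own.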
Step 1: Initialize the Next.js Project
Start by creating a fresh Next.js application:
npx create-next-app@latest docker-fullstack --typescript --tailwind --app --src-dir --eslint
cd docker-fullstack
Install the database and caching dependencies:
npm install pg redis
npm install -D @types/pg
Your project structure should look like this:
docker-fullstack/
├── src/
│ ├── app/
│ │ ├── api/
│ │ ├── layout.tsx
│ │ └── page.tsx
│ └── lib/
├── package.json
├── tsconfig.json
└── next.config.ts
Step 2: Create the Dockerfile
Create a Dockerfile at the project root. This uses a multi-stage build to keep the final image small:
# Stage 1: Dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
# Install all dependencies; the builder stage needs devDependencies to run next build
RUN npm ci
# Stage 2: Builder
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# Stage 3: Runner (Production)
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
CMD ["node", "server.js"]
For the production stage to work with standalone output, update next.config.ts:
import type { NextConfig } from "next";
const nextConfig: NextConfig = {
output: "standalone",
};
export default nextConfig;
Step 3: Create the Development Dockerfile
For development, you need hot reloading and source mounting. Create Dockerfile.dev:
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
This simpler Dockerfile skips the multi-stage build — in development, speed and live reloading matter more than image size.
Step 4: Write the Docker Compose File
Create docker-compose.yml at the project root:
services:
  # Next.js Application
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - ./src:/app/src
      - ./public:/app/public
      - /app/node_modules
      - /app/.next
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/taskdb
      - REDIS_URL=redis://cache:6379
      - NODE_ENV=development
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_healthy
    networks:
      - app-network

  # PostgreSQL Database
  db:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: taskdb
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - app-network

  # Redis Cache
  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5
    command: redis-server --appendonly yes
    networks:
      - app-network

volumes:
  postgres_data:
  redis_data:

networks:
  app-network:
    driver: bridge
Let us break down the key concepts:
Volume Mounts
volumes:
  - ./src:/app/src        # Mount source for hot reloading
  - ./public:/app/public  # Mount public assets
  - /app/node_modules     # Anonymous volume — keeps container's node_modules
  - /app/.next            # Anonymous volume — keeps container's build cache
The anonymous volumes for node_modules and .next prevent your local files from overriding the container's installed dependencies. This is critical — without them, architecture mismatches (macOS vs Linux) would break native modules.
Health Checks
healthcheck:
  test: ["CMD-SHELL", "pg_isready -U postgres"]
  interval: 5s
  timeout: 5s
  retries: 5
Health checks ensure PostgreSQL is actually ready to accept connections before Next.js tries to connect. Without this, your app would crash on startup because the database port might be open while the server is still initializing.
Dependency Ordering
depends_on:
  db:
    condition: service_healthy
  cache:
    condition: service_healthy
Using condition: service_healthy instead of just depends_on: [db] waits for the health check to pass, not just for the container to start.
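Compose-level ordering only gates startup, though. If the database restarts while the stack is running, the application still needs its own resilience. A small, hypothetical retry helper with exponential backoff (not part of the tutorial's code) illustrates the idea:

```typescript
// Hypothetical helper: retry an async operation with exponential backoff.
// Useful as defense-in-depth even with Compose health checks configured,
// e.g. wrapping the first database query after the app boots.
async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 5,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Backoff doubles each attempt: 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```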
Step 5: Database Initialization
Create init.sql to set up the database schema on first run:
-- Create tasks table
CREATE TABLE IF NOT EXISTS tasks (
id SERIAL PRIMARY KEY,
title VARCHAR(255) NOT NULL,
description TEXT,
status VARCHAR(20) DEFAULT 'pending' CHECK (status IN ('pending', 'in_progress', 'completed')),
priority INTEGER DEFAULT 0 CHECK (priority BETWEEN 0 AND 3),
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
-- Create index for common queries
CREATE INDEX idx_tasks_status ON tasks(status);
CREATE INDEX idx_tasks_priority ON tasks(priority);
-- Insert sample data
INSERT INTO tasks (title, description, status, priority) VALUES
('Set up Docker environment', 'Configure Docker Compose for the project', 'completed', 3),
('Design database schema', 'Create PostgreSQL tables and indexes', 'in_progress', 2),
('Implement caching layer', 'Add Redis caching for API responses', 'pending', 1),
('Write API endpoints', 'Build CRUD operations for tasks', 'pending', 2),
('Add authentication', 'Implement JWT-based auth', 'pending', 1);
-- Create updated_at trigger
CREATE OR REPLACE FUNCTION update_modified_column()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = NOW();
RETURN NEW;
END;
$$ language 'plpgsql';
CREATE TRIGGER update_tasks_modtime
BEFORE UPDATE ON tasks
FOR EACH ROW
EXECUTE FUNCTION update_modified_column();
This file runs automatically the first time the PostgreSQL container starts with an empty data volume — scripts mounted into docker-entrypoint-initdb.d are executed only on initial database creation.
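Once the stack is running, you can confirm the trigger fires by updating a row from psql (the id value here assumes the sample data above):

```sql
-- Update any sample row, then compare its timestamps:
UPDATE tasks SET status = 'completed' WHERE id = 2;
SELECT id, status, created_at, updated_at FROM tasks WHERE id = 2;
-- updated_at should now be later than created_at
```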
Step 6: Database Connection Module
Create src/lib/db.ts to manage the PostgreSQL connection pool:
import { Pool } from "pg";
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 20,
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 2000,
});
pool.on("error", (err) => {
console.error("Unexpected database pool error:", err);
});
export async function query<T>(text: string, params?: unknown[]): Promise<T[]> {
const result = await pool.query(text, params);
return result.rows as T[];
}
export async function queryOne<T>(
text: string,
params?: unknown[]
): Promise<T | null> {
const rows = await query<T>(text, params);
return rows[0] || null;
}
export default pool;
Step 7: Redis Connection Module
Create src/lib/redis.ts for the caching layer:
import { createClient } from "redis";
const redis = createClient({
url: process.env.REDIS_URL,
});
redis.on("error", (err) => {
console.error("Redis connection error:", err);
});
redis.on("connect", () => {
console.log("Connected to Redis");
});
// Connect on first import; log failures instead of crashing the process
if (!redis.isOpen) {
  redis.connect().catch((err) => console.error("Redis connect failed:", err));
}
export async function getCache<T>(key: string): Promise<T | null> {
const data = await redis.get(key);
if (!data) return null;
return JSON.parse(data) as T;
}
export async function setCache(
key: string,
value: unknown,
ttlSeconds = 60
): Promise<void> {
await redis.set(key, JSON.stringify(value), { EX: ttlSeconds });
}
export async function invalidateCache(pattern: string): Promise<void> {
  // KEYS blocks Redis while it scans the keyspace; fine for small
  // development datasets, but prefer SCAN for large production keyspaces.
  const keys = await redis.keys(pattern);
  if (keys.length > 0) {
    await redis.del(keys);
  }
}
export default redis;
Step 8: Build the API Routes
Create the task API with full CRUD operations. Start with src/app/api/tasks/route.ts:
import { NextRequest, NextResponse } from "next/server";
import { query } from "@/lib/db";
import { getCache, setCache, invalidateCache } from "@/lib/redis";
interface Task {
id: number;
title: string;
description: string | null;
status: string;
priority: number;
created_at: string;
updated_at: string;
}
// GET /api/tasks
export async function GET(request: NextRequest) {
const { searchParams } = new URL(request.url);
const status = searchParams.get("status");
// Check cache first
const cacheKey = `tasks:${status || "all"}`;
const cached = await getCache<Task[]>(cacheKey);
if (cached) {
return NextResponse.json({ data: cached, source: "cache" });
}
// Query database
let tasks: Task[];
if (status) {
tasks = await query<Task>(
"SELECT * FROM tasks WHERE status = $1 ORDER BY priority DESC, created_at DESC",
[status]
);
} else {
tasks = await query<Task>(
"SELECT * FROM tasks ORDER BY priority DESC, created_at DESC"
);
}
// Cache for 30 seconds
await setCache(cacheKey, tasks, 30);
return NextResponse.json({ data: tasks, source: "database" });
}
// POST /api/tasks
export async function POST(request: NextRequest) {
const body = await request.json();
const { title, description, priority } = body;
if (!title) {
return NextResponse.json(
{ error: "Title is required" },
{ status: 400 }
);
}
const task = await query<Task>(
"INSERT INTO tasks (title, description, priority) VALUES ($1, $2, $3) RETURNING *",
[title, description || null, priority || 0]
);
// Invalidate task caches
await invalidateCache("tasks:*");
return NextResponse.json({ data: task[0] }, { status: 201 });
}Now create the dynamic route at src/app/api/tasks/[id]/route.ts:
import { NextRequest, NextResponse } from "next/server";
import { query, queryOne } from "@/lib/db";
import { invalidateCache } from "@/lib/redis";
interface Task {
id: number;
title: string;
description: string | null;
status: string;
priority: number;
created_at: string;
updated_at: string;
}
// GET /api/tasks/:id
export async function GET(
_request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
const { id } = await params;
const task = await queryOne<Task>("SELECT * FROM tasks WHERE id = $1", [id]);
if (!task) {
return NextResponse.json({ error: "Task not found" }, { status: 404 });
}
return NextResponse.json({ data: task });
}
// PATCH /api/tasks/:id
export async function PATCH(
request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
const { id } = await params;
const body = await request.json();
const { title, description, status, priority } = body;
const task = await queryOne<Task>(
`UPDATE tasks
SET title = COALESCE($1, title),
description = COALESCE($2, description),
status = COALESCE($3, status),
priority = COALESCE($4, priority)
WHERE id = $5
RETURNING *`,
[title, description, status, priority, id]
);
if (!task) {
return NextResponse.json({ error: "Task not found" }, { status: 404 });
}
await invalidateCache("tasks:*");
return NextResponse.json({ data: task });
}
// DELETE /api/tasks/:id
export async function DELETE(
_request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
const { id } = await params;
const task = await queryOne<Task>(
"DELETE FROM tasks WHERE id = $1 RETURNING *",
[id]
);
if (!task) {
return NextResponse.json({ error: "Task not found" }, { status: 404 });
}
await invalidateCache("tasks:*");
return NextResponse.json({ data: task });
}Step 9: Add a Health Check Endpoint
Create src/app/api/health/route.ts to verify all services are connected:
import { NextResponse } from "next/server";
import pool from "@/lib/db";
import redis from "@/lib/redis";
export async function GET() {
const health: Record<string, string> = {
status: "ok",
timestamp: new Date().toISOString(),
};
// Check PostgreSQL
try {
await pool.query("SELECT 1");
health.database = "connected";
} catch {
health.database = "disconnected";
health.status = "degraded";
}
// Check Redis
try {
await redis.ping();
health.cache = "connected";
} catch {
health.cache = "disconnected";
health.status = "degraded";
}
const statusCode = health.status === "ok" ? 200 : 503;
return NextResponse.json(health, { status: statusCode });
}
Step 10: Launch the Stack
Everything is in place. Start the entire application with one command:
docker compose up --build
You should see output showing all three services starting:
[+] Running 3/3
✔ Container docker-fullstack-cache-1 Healthy
✔ Container docker-fullstack-db-1 Healthy
✔ Container docker-fullstack-app-1 Started
Test the endpoints:
# Health check
curl http://localhost:3000/api/health
# List all tasks
curl http://localhost:3000/api/tasks
# Create a new task
curl -X POST http://localhost:3000/api/tasks \
-H "Content-Type: application/json" \
-d '{"title": "Learn Docker Compose", "description": "Follow the tutorial", "priority": 3}'
# Filter by status
curl http://localhost:3000/api/tasks?status=pending
# Update a task
curl -X PATCH http://localhost:3000/api/tasks/1 \
-H "Content-Type: application/json" \
-d '{"status": "completed"}'
# Delete a task
curl -X DELETE http://localhost:3000/api/tasks/1
Step 11: Development Workflow Tips
Viewing Logs
# All services
docker compose logs -f
# Single service
docker compose logs -f app
# Last 50 lines
docker compose logs --tail 50 db
Accessing the Database Shell
docker compose exec db psql -U postgres -d taskdb
Once inside psql, you can run queries directly:
SELECT * FROM tasks;
\dt -- list tables
\d tasks -- describe table schema
Accessing Redis CLI
docker compose exec cache redis-cli
Useful Redis commands for debugging:
KEYS * # List all keys
GET tasks:all # View cached data
TTL tasks:all # Check time-to-live
FLUSHALL # Clear all cache
Rebuilding After Dependency Changes
When you add new npm packages, rebuild the app container:
docker compose up --build app
Resetting the Database
To wipe the database and start fresh:
docker compose down -v # -v removes volumes
docker compose up --build
Step 12: Production Configuration
Create docker-compose.prod.yml for production overrides:
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    volumes: !reset []  # clear the dev bind mounts (requires a recent Compose version)
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://${DB_USER}:${DB_PASSWORD}@db:5432/${DB_NAME}
      - REDIS_URL=redis://:${REDIS_PASSWORD}@cache:6379  # include the password set via requirepass below
    restart: always

  db:
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    restart: always

  cache:
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
    restart: always
Create a .env.production file (never commit this):
DB_USER=taskapp
DB_PASSWORD=a-very-strong-password-here
DB_NAME=taskdb_prod
REDIS_PASSWORD=another-strong-password
Deploy with:
docker compose -f docker-compose.yml -f docker-compose.prod.yml --env-file .env.production up -d --build
The -f flags merge both Compose files, with the production file overriding development values.
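Before deploying, it can be useful to inspect exactly what the merged configuration contains. Compose can render the final, variable-substituted file without starting anything (this assumes the Compose files and env file above exist in the current directory):

```shell
# Print the merged configuration for review without starting any containers
docker compose -f docker-compose.yml -f docker-compose.prod.yml \
  --env-file .env.production config
```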
Step 13: Add a .dockerignore File
Create .dockerignore to keep your images clean:
node_modules
.next
.git
.gitignore
*.md
docker-compose*.yml
.env*
.vscode
coverage
This prevents large directories and sensitive files from being copied into your Docker image, reducing build time and image size.
Step 14: Monitoring with Docker Compose
Monitor resource usage across your services with the stats command:
# Real-time resource usage
docker compose stats
# Output:
# NAME CPU % MEM USAGE / LIMIT NET I/O
# app-1 0.50% 245MiB / 8GiB 1.2kB / 890B
# db-1 0.10% 45MiB / 8GiB 500B / 200B
# cache-1 0.05% 12MiB / 8GiB 300B / 100B
For production environments, you can add resource limits:
services:
  app:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.5"
          memory: 256M
Troubleshooting
Port Already in Use
Error: bind: address already in use
Another process is using the port. Find and stop it:
# Find process using port 5432
lsof -i :5432
# Or change the port mapping in docker-compose.yml
ports:
  - "5433:5432"  # Map to 5433 locally
Database Connection Refused
If the app starts before PostgreSQL is ready, you will see:
Error: connect ECONNREFUSED 172.18.0.2:5432
This should not happen with health checks configured, but if it does, ensure depends_on uses condition: service_healthy.
Volume Permission Issues on Linux
# Fix permissions for PostgreSQL data
sudo chown -R 999:999 ./postgres_data
# Or use named volumes (recommended, already used in this tutorial)
Hot Reloading Not Working
Ensure your volume mounts include the source directory:
volumes:
  - ./src:/app/src
Also check that the Next.js dev server is receiving file change events. On Docker Desktop for macOS, the default file-sharing backend (VirtioFS in current versions) usually propagates changes into the container automatically.
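If changes are still not detected (common with some network file systems or older file-sharing backends), you can force Next.js's webpack watcher to poll by adding an environment variable to the app service:

```yaml
# app service in docker-compose.yml: force file watching via polling
environment:
  - WATCHPACK_POLLING=true
```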
Next Steps
Now that your Docker Compose environment is running, consider these enhancements:
- Add Prisma or Drizzle ORM — replace raw SQL with a type-safe ORM for migrations and schema management
- Implement authentication — add a user table and JWT-based auth to protect API routes
- Set up Nginx — add a reverse proxy service for SSL termination and load balancing
- Add pgAdmin — include a database management UI as another Compose service
- Configure CI/CD — use the same Compose file in GitHub Actions for integration testing
- Add Adminer — a lightweight database management tool as a Compose service
Conclusion
Docker Compose transforms full-stack development by eliminating environment inconsistencies. In this tutorial, you built a complete application stack with:
- Next.js 15 serving the API with hot reloading in development
- PostgreSQL 16 with persistent storage, automatic initialization, and health checks
- Redis 7 for response caching with TTL-based invalidation
- Multi-stage Dockerfile optimized for both development and production
- Separate Compose configurations for development and production environments
The entire stack starts with a single docker compose up command. Every team member gets an identical environment, and the same configuration extends to CI/CD pipelines and production deployment.
The key patterns you learned — health checks, dependency ordering, volume mounting, and multi-stage builds — apply to any Docker Compose project, regardless of the technology stack.