Deploy a Next.js Application with Docker and CI/CD in Production

This tutorial guides you step by step through containerizing a Next.js application with Docker, setting up a CI/CD pipeline with GitHub Actions, and deploying automatically to production on a Linux VPS. By the end, each push to your main branch will trigger an automatic deployment.
Learning Objectives
By the end of this tutorial, you will be able to:
- Write an optimized multi-stage Dockerfile for Next.js
- Configure Docker Compose for local development and production
- Set up a complete CI/CD pipeline with GitHub Actions
- Deploy automatically to a VPS server via SSH
- Manage environment variables securely
- Configure an Nginx reverse proxy with SSL
Prerequisites
Before starting, make sure you have:
- Node.js 20+ installed locally
- Docker Desktop installed and running
- A GitHub account with a repository containing a Next.js application
- A Linux VPS (Ubuntu 22.04+ recommended) with SSH access
- A domain name pointing to your VPS (optional but recommended)
- Basic knowledge of Next.js and the command line
What You Will Build
We will set up a complete deployment infrastructure:
- Multi-stage Dockerfile — Optimized Docker image (~150 MB instead of ~1 GB)
- Docker Compose — Service orchestration (app + database + cache)
- CI/CD Pipeline — Automatic tests, build, and deployment via GitHub Actions
- Reverse proxy — Nginx with Let's Encrypt SSL certificate
- Monitoring — Healthchecks and structured logs
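For orientation, here is the file layout this tutorial builds up (paths as introduced in the steps below):

```text
my-app/
├── app/api/health/route.ts       # healthcheck endpoint (Step 1)
├── Dockerfile                    # multi-stage build (Step 2)
├── .dockerignore
├── docker-compose.dev.yml        # local development (Step 3)
├── docker-compose.prod.yml       # production (Step 3)
├── .github/workflows/deploy.yml  # CI/CD pipeline (Step 4)
├── scripts/deploy.sh             # zero-downtime deploy (Step 7)
└── lib/logger.ts                 # structured logging (Step 8)
```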
Step 1: Prepare the Next.js Project
Start by verifying that your Next.js project works correctly locally.
# Check the Node.js version
node --version # v20.x.x required
# Install dependencies
npm install
# Test the production build
npm run build
# Verify that the server starts
npm start
If you do not have a Next.js project yet, create one quickly with npx create-next-app@latest my-app --typescript. This tutorial works with Next.js 14 and 15.
Add a healthcheck script
Create a file app/api/health/route.ts so Docker can verify the state of your application:
// app/api/health/route.ts
import { NextResponse } from "next/server";

export async function GET() {
  return NextResponse.json({
    status: "healthy",
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
  });
}
This endpoint will be used by Docker and your CI/CD pipeline to validate that the application is working.
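As a quick local sanity check of the payload shape, you can mirror the handler body in a one-off node command (a standalone sketch, since hitting the real endpoint requires the dev server to be running):

```shell
# Build the same payload the route handler returns and print it as JSON
node -e '
const payload = {
  status: "healthy",
  timestamp: new Date().toISOString(),
  uptime: process.uptime(),
};
console.log(JSON.stringify(payload));
'
```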
Step 2: Write the Multi-Stage Dockerfile
The secret to a performant Docker image for Next.js is the multi-stage build. This separates the build tools (which are heavy) from the final image (which is lightweight).
Create a Dockerfile at the root of your project:
# =============================================================================
# Stage 1: Base image with dependencies
# =============================================================================
FROM node:20-alpine AS base
# Install libc6-compat for native modules
RUN apk add --no-cache libc6-compat
WORKDIR /app
# =============================================================================
# Stage 2: Install dependencies
# =============================================================================
FROM base AS deps
# Copy dependency files
COPY package.json package-lock.json* ./
# Install all dependencies (including devDependencies for the build)
RUN npm ci
# =============================================================================
# Stage 3: Build the application
# =============================================================================
FROM base AS builder
WORKDIR /app
# Copy installed dependencies
COPY --from=deps /app/node_modules ./node_modules
# Copy source code
COPY . .
# Disable Next.js telemetry during build
ENV NEXT_TELEMETRY_DISABLED=1
# Production build
RUN npm run build
# =============================================================================
# Stage 4: Production image (lightweight)
# =============================================================================
FROM node:20-alpine AS runner
WORKDIR /app
# Production mode
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
# Create a non-root user for security
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# Copy public files
COPY --from=builder /app/public ./public
# Copy the Next.js standalone build
# Standalone mode copies only the necessary files
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
# Use the non-root user
USER nextjs
# Expose the port
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
# Docker healthcheck
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/api/health || exit 1
# Start the application
CMD ["node", "server.js"]
Configure standalone mode in Next.js
For the Dockerfile to work, you must enable the standalone output in your Next.js configuration:
// next.config.js (or next.config.mjs)
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: "standalone",
};

module.exports = nextConfig;
The standalone mode is essential for Docker. Without it, the final image will require the entire node_modules folder, significantly increasing its size. With standalone, Next.js copies only the files needed for execution.
Create the .dockerignore file
Create a .dockerignore file to avoid copying unnecessary files into the image:
# .dockerignore
node_modules
.next
.git
.gitignore
*.md
docker-compose*.yml
.env*.local
.vscode
.idea
coverage
.husky
Test the Docker build locally
# Build the image
docker build -t my-nextjs-app .
# Check the image size
docker images my-nextjs-app
# REPOSITORY TAG SIZE
# my-nextjs-app latest ~150MB
# Run the container
docker run -p 3000:3000 my-nextjs-app
# Test in another terminal
curl http://localhost:3000/api/health
If you get {"status":"healthy"}, your Docker image is working correctly. The size should be between 130 and 200 MB depending on your dependencies.
Step 3: Configure Docker Compose
Docker Compose allows you to orchestrate multiple services. We will configure two files: one for development and one for production.
Docker Compose for development
Create docker-compose.dev.yml:
# docker-compose.dev.yml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      target: deps # Use only the dependencies stage
    command: npm run dev
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules # Exclude node_modules from the volume
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:
Docker Compose for production
Create docker-compose.prod.yml:
# docker-compose.prod.yml
services:
  app:
    # Use the image built and pushed by the CI pipeline (Step 4);
    # replace with your own GitHub user and repository name
    image: ghcr.io/YOUR_GITHUB_USER/YOUR_REPO:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    env_file:
      - .env.production
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/health"]
      interval: 30s
      timeout: 10s
      start_period: 40s
      retries: 3
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "0.5"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
  db:
    image: postgres:16-alpine
    restart: unless-stopped
    env_file:
      - .env.production
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          memory: 256M
  cache:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    deploy:
      resources:
        limits:
          memory: 128M

volumes:
  postgres_data:
  redis_data:
Start the development environment
# Start all services
docker compose -f docker-compose.dev.yml up -d
# View logs
docker compose -f docker-compose.dev.yml logs -f app
# Stop services
docker compose -f docker-compose.dev.yml down
Step 4: Configure the CI/CD Pipeline with GitHub Actions
This is the most important part. We will create a pipeline that:
- Tests the code on every pull request
- Builds the Docker image
- Deploys automatically to the VPS on every merge to main
Create the main workflow
Create the file .github/workflows/deploy.yml:
# .github/workflows/deploy.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  # =====================================================
  # Job 1: Linting and tests
  # =====================================================
  test:
    name: Tests and quality
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - name: Install dependencies
        run: npm ci
      - name: Linting
        run: npm run lint
      - name: Unit tests
        run: npm test -- --passWithNoTests
      - name: Build check
        run: npm run build

  # =====================================================
  # Job 2: Build and push Docker image
  # =====================================================
  build:
    name: Docker Build
    runs-on: ubuntu-latest
    needs: test
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    permissions:
      contents: read
      packages: write
    outputs:
      image-tag: ${{ steps.meta.outputs.tags }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Login to GitHub registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix=
            type=raw,value=latest
      - name: Setup Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  # =====================================================
  # Job 3: Deploy to VPS
  # =====================================================
  deploy:
    name: Production deployment
    runs-on: ubuntu-latest
    needs: build
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    environment:
      name: production
      url: https://${{ vars.DOMAIN_NAME }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Deploy via SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            # Navigate to the project directory
            cd /opt/apps/mon-app
            # Login to the GitHub registry
            echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin
            # Pull the latest image
            docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
            # Update the service with zero downtime
            docker compose -f docker-compose.prod.yml up -d --no-deps --wait app
            # Clean up old images
            docker image prune -f
            # Verify the healthcheck
            sleep 10
            curl -f http://localhost:3000/api/health || exit 1
      - name: Success notification
        if: success()
        run: echo "Deployment successful on ${{ vars.DOMAIN_NAME }}"
      - name: Failure notification
        if: failure()
        run: echo "Deployment failed - check the logs"
Configure GitHub secrets
Go to Settings > Secrets and variables > Actions in your GitHub repository and add:
| Secret | Description | Example |
|---|---|---|
| VPS_HOST | IP address or domain of the VPS | 203.0.113.50 |
| VPS_USER | SSH user | deploy |
| VPS_SSH_KEY | Private SSH key | Contents of ~/.ssh/id_ed25519 |
And in Variables:
| Variable | Description | Example |
|---|---|---|
| DOMAIN_NAME | Domain name | myapp.example.com |
Never store your SSH keys or tokens in source code. Use GitHub secrets exclusively for sensitive information. Generate a dedicated SSH key for deployment with ssh-keygen -t ed25519 -C "deploy@github-actions".
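The deploy key can be generated locally; the private half goes into the VPS_SSH_KEY secret and the public half onto the server (the file path here is illustrative):

```shell
# Generate a dedicated ed25519 key pair with no passphrase (illustrative path)
ssh-keygen -t ed25519 -C "deploy@github-actions" -N "" -f ./deploy_key
# Private key -> GitHub secret VPS_SSH_KEY
# Public key  -> appended to the deploy user's authorized_keys (Step 5)
cat deploy_key.pub
```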
Step 5: Prepare the VPS Server
Connect to your VPS and prepare the environment.
Install Docker on the VPS
# Update the system
sudo apt update && sudo apt upgrade -y
# Install Docker via the official script
curl -fsSL https://get.docker.com | sudo sh
# Add the user to the docker group
sudo usermod -aG docker $USER
# Install Docker Compose (included in modern Docker Engine)
docker compose version
# Restart the session to apply permissions
exit
# Reconnect
Create the project structure
# Create the application directory
sudo mkdir -p /opt/apps/mon-app
sudo chown $USER:$USER /opt/apps/mon-app
cd /opt/apps/mon-app
# Create the production environment file
cat > .env.production << 'EOF'
NODE_ENV=production
DATABASE_URL=postgresql://postgres:YOUR_STRONG_PASSWORD@db:5432/myapp
REDIS_URL=redis://cache:6379
POSTGRES_USER=postgres
POSTGRES_PASSWORD=YOUR_STRONG_PASSWORD
POSTGRES_DB=myapp
EOF
# Protect the file
chmod 600 .env.production
Copy the Docker Compose file
# Copy docker-compose.prod.yml to the server from your local machine
scp docker-compose.prod.yml deploy@YOUR_VPS:/opt/apps/mon-app/
Create a dedicated deployment user
# Create the deploy user
sudo adduser --disabled-password deploy
sudo usermod -aG docker deploy
# Configure the SSH key for deployment
sudo mkdir -p /home/deploy/.ssh
sudo cp ~/.ssh/authorized_keys /home/deploy/.ssh/
# Add the public key generated for GitHub Actions
echo "YOUR_ED25519_PUBLIC_KEY" | sudo tee -a /home/deploy/.ssh/authorized_keys
sudo chown -R deploy:deploy /home/deploy/.ssh
sudo chmod 700 /home/deploy/.ssh
sudo chmod 600 /home/deploy/.ssh/authorized_keys
Step 6: Configure Nginx as a Reverse Proxy
Nginx sits in front of your Next.js application to handle SSL, caching, and compression.
Install Nginx and Certbot
# Install Nginx
sudo apt install -y nginx
# Install Certbot for Let's Encrypt
sudo apt install -y certbot python3-certbot-nginx
Configure the virtual host
Create the Nginx configuration file:
# /etc/nginx/sites-available/my-app
upstream nextjs_upstream {
    server 127.0.0.1:3000;
    keepalive 64;
}

server {
    listen 80;
    server_name myapp.example.com;

    # Redirect to HTTPS (will be configured by Certbot)
    location / {
        return 301 https://$server_name$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name myapp.example.com;

    # SSL certificates will be added by Certbot
    # ssl_certificate /etc/letsencrypt/live/myapp.example.com/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/myapp.example.com/privkey.pem;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml application/json application/javascript
               application/xml+rss application/atom+xml image/svg+xml;

    # Cache Next.js static files
    location /_next/static {
        proxy_pass http://nextjs_upstream;
        proxy_cache_valid 60m;
        add_header Cache-Control "public, max-age=31536000, immutable";
    }

    # Static public files
    location /images {
        proxy_pass http://nextjs_upstream;
        proxy_cache_valid 60m;
        add_header Cache-Control "public, max-age=86400";
    }

    # Proxy to Next.js
    location / {
        proxy_pass http://nextjs_upstream;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}
Enable the site and SSL
# Enable the site
sudo ln -s /etc/nginx/sites-available/my-app /etc/nginx/sites-enabled/
# Check the configuration
sudo nginx -t
# Reload Nginx
sudo systemctl reload nginx
# Get the SSL certificate with Certbot
sudo certbot --nginx -d myapp.example.com --non-interactive --agree-tos -m your@email.com
# Verify automatic renewal
sudo certbot renew --dry-run
Certbot will automatically modify your Nginx configuration to add SSL certificates. Renewal is automatic via a systemd timer.
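Once a certificate is issued, you can read its subject and expiry with openssl x509. The command is demonstrated here on a throwaway self-signed certificate; it works identically on /etc/letsencrypt/live/myapp.example.com/fullchain.pem:

```shell
# Create a throwaway self-signed cert just to demonstrate the inspection command
openssl req -x509 -newkey rsa:2048 -days 90 -nodes \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=myapp.example.com" 2>/dev/null
# Same invocation works on the Let's Encrypt fullchain.pem
openssl x509 -in /tmp/demo-cert.pem -noout -subject -enddate
```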
Step 7: Zero-Downtime Deployment
To avoid interruptions during updates, we will configure a rolling deployment.
Advanced deployment script
Create scripts/deploy.sh on your VPS:
#!/bin/bash
# scripts/deploy.sh - Zero downtime deployment
set -euo pipefail

APP_DIR="/opt/apps/mon-app"
HEALTH_URL="http://localhost:3000/api/health"
MAX_RETRIES=30
RETRY_INTERVAL=2

log() {
  echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}

check_health() {
  curl -sf "$HEALTH_URL" > /dev/null 2>&1
}

cd "$APP_DIR"
log "Starting deployment..."

# Pull the new image
log "Pulling the new image..."
docker compose -f docker-compose.prod.yml pull app

# Start the new container
log "Starting the new container..."
docker compose -f docker-compose.prod.yml up -d --no-deps app

# Wait for the healthcheck to pass
log "Verifying healthcheck..."
retries=0
until check_health; do
  retries=$((retries + 1))
  if [ "$retries" -ge "$MAX_RETRIES" ]; then
    log "ERROR: Healthcheck failed after $MAX_RETRIES attempts"
    log "Inspect the logs: docker compose -f docker-compose.prod.yml logs app --tail 50"
    exit 1
  fi
  log "Waiting for healthcheck... ($retries/$MAX_RETRIES)"
  sleep "$RETRY_INTERVAL"
done
log "Healthcheck OK"

# Clean up old images
docker image prune -f
log "Deployment completed successfully"
# Make the script executable
chmod +x /opt/apps/mon-app/scripts/deploy.sh
Step 8: Monitoring and Logs
Configure structured logging
Add logging configuration to your Next.js application:
// lib/logger.ts
type LogLevel = "info" | "warn" | "error" | "debug";

interface LogEntry {
  level: LogLevel;
  message: string;
  timestamp: string;
  [key: string]: unknown;
}

function log(level: LogLevel, message: string, meta?: Record<string, unknown>) {
  const entry: LogEntry = {
    level,
    message,
    timestamp: new Date().toISOString(),
    ...meta,
  };
  const output = JSON.stringify(entry);
  if (level === "error") {
    console.error(output);
  } else if (level === "warn") {
    console.warn(output);
  } else {
    console.log(output);
  }
}

export const logger = {
  info: (msg: string, meta?: Record<string, unknown>) => log("info", msg, meta),
  warn: (msg: string, meta?: Record<string, unknown>) => log("warn", msg, meta),
  error: (msg: string, meta?: Record<string, unknown>) => log("error", msg, meta),
  debug: (msg: string, meta?: Record<string, unknown>) => log("debug", msg, meta),
};
Useful monitoring commands
# View logs in real time
docker compose -f docker-compose.prod.yml logs -f app
# View container statistics
docker stats
# Check container status
docker compose -f docker-compose.prod.yml ps
# Check Docker disk usage
docker system df
# Clean up unused resources
docker system prune -f
Add an advanced healthcheck
Enrich your /api/health endpoint to include more information:
// app/api/health/route.ts
import { NextResponse } from "next/server";

export async function GET() {
  const healthData = {
    status: "healthy",
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    memory: {
      used: Math.round(process.memoryUsage().heapUsed / 1024 / 1024),
      total: Math.round(process.memoryUsage().heapTotal / 1024 / 1024),
      unit: "MB",
    },
    version: process.env.npm_package_version || "unknown",
    node: process.version,
  };
  return NextResponse.json(healthData);
}
Step 9: Secure Everything
Environment variables
Create a .env.example file to document the required variables without exposing values:
# .env.example - Required environment variables
NODE_ENV=production
DATABASE_URL=postgresql://user:password@db:5432/dbname
REDIS_URL=redis://cache:6379
# Database
POSTGRES_USER=postgres
POSTGRES_PASSWORD=change_me
POSTGRES_DB=myapp
# Application
NEXTAUTH_SECRET=generate_a_strong_secret
NEXTAUTH_URL=https://myapp.example.com
Security best practices
Here are the essential points to follow:
- Non-root user in the container (already configured in the Dockerfile)
- GitHub secrets for all sensitive information
- SSL certificate via Let's Encrypt (automatic renewal)
- Security headers in Nginx (HSTS, X-Frame-Options, etc.)
- Resource limits in Docker Compose (memory, CPU)
- Structured logs for anomaly detection
Always change default passwords. Use openssl rand -base64 32 to generate strong secrets. Never push .env files to your Git repository.
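Generating a secret from the command line and checking its size (32 random bytes base64-encode to a 44-character string):

```shell
# 32 bytes of randomness -> 44 base64 characters
SECRET=$(openssl rand -base64 32)
echo "${#SECRET}"  # prints 44
```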
Test Your Complete Pipeline
Here is how to verify that everything works end to end:
1. Local test with Docker
# Build and start
docker compose -f docker-compose.dev.yml up --build -d
# Check logs
docker compose -f docker-compose.dev.yml logs -f
# Test the healthcheck
curl http://localhost:3000/api/health
2. Test the CI/CD pipeline
# Create a test branch
git checkout -b test/cicd-pipeline
# Make a minor change
echo "// test" >> lib/logger.ts
# Commit and push
git add . && git commit -m "test: verify CI/CD pipeline"
git push -u origin test/cicd-pipeline
# Create a pull request on GitHub
gh pr create --title "Test CI/CD pipeline" --body "Pipeline verification"
3. Verify the deployment
# After merging to main, verify the deployment
curl https://myapp.example.com/api/health
# Expected response:
# {"status":"healthy","timestamp":"...","uptime":42,"memory":{"used":85,"total":128,"unit":"MB"}}
Troubleshooting
Docker build fails
| Problem | Solution |
|---|---|
| COPY failed: file not found | Check that .dockerignore does not exclude necessary files |
| npm ci fails | Make sure package-lock.json is present and up to date |
| Out of memory during build | Increase Docker Desktop memory or set NODE_OPTIONS=--max-old-space-size=4096 |
GitHub Actions pipeline fails
| Problem | Solution |
|---|---|
| Permission denied (SSH) | Verify the SSH key is correctly configured in secrets |
| Image push refused | Check packages: write permissions in the workflow |
| Healthcheck timeout | Increase start-period and check application logs |
Deployment does not work
# Check container logs
docker compose -f docker-compose.prod.yml logs app --tail 50
# Check that ports are not in use
sudo ss -tulnp | grep 3000
# Restart services
docker compose -f docker-compose.prod.yml restart
# Fully rebuild
docker compose -f docker-compose.prod.yml up -d --build --force-recreate
Next Steps
Now that your pipeline is in place, here is how to improve it:
- Add E2E tests with Playwright in the CI pipeline
- Set up notifications on Slack or Discord for deployments
- Set up a CDN (Cloudflare) in front of Nginx for global caching
- Add application monitoring with Sentry or Datadog
- Implement blue/green deployments for guaranteed zero-downtime
- Configure automatic backups of the PostgreSQL database
Conclusion
You now have a complete and professional deployment infrastructure for your Next.js application:
- Multi-stage Docker for lightweight and secure images
- Docker Compose for service orchestration
- GitHub Actions for continuous integration and deployment
- Nginx + SSL for a secure reverse proxy
- Monitoring with healthchecks and structured logs
This architecture is a common production pattern: reliable, reproducible, and easy to maintain. Every push to main automatically triggers a deployment verified by healthchecks.
The most important thing is to start simple and iterate. Deploy a basic version first, then progressively add layers of monitoring, security, and optimization.