Deploy Docker Containers on Cloudflare's Edge with Cloudflare Containers

Real Docker, on the edge. Cloudflare Containers run actual OCI-compliant container images across Cloudflare's 300+ global locations, orchestrated by Workers. No Kubernetes, no cluster management, and sub-second cold starts. Pay per request and per second of compute, and only when traffic hits your container.
What You'll Build
In this tutorial, you'll deploy a containerized Node.js + Express image processing API to Cloudflare Containers. You'll route requests through a Worker, scale containers automatically based on traffic, and connect the whole stack to KV for caching. By the end you'll have a production-ready edge service that runs heavy workloads (image manipulation, PDF generation, ML inference) close to users worldwide.
Features of the final stack:
- A real Docker image running an Express API (Sharp for image resizing)
- A Worker acting as the front door, routing traffic, caching, and rate-limiting
- Per-region autoscaling driven by request volume
- KV-backed response caching to reduce container invocations
- Health checks, structured logging, and observability via Cloudflare dashboards
Prerequisites
Before you begin, make sure you have:
- Node.js 20+ installed (download here)
- Docker Desktop running locally (install Docker)
- A Cloudflare account on the Workers Paid plan (Containers requires it — about five dollars per month)
- Wrangler CLI v4+ — Cloudflare's developer tool
- Familiarity with Docker basics and TypeScript
- A code editor (VS Code recommended)
Cloudflare Containers is generally available as of early 2026. Each container instance gets up to 4 vCPU and 8 GB RAM, and you only pay for the seconds your container is actively serving requests, plus a small per-request fee.
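To get a feel for the billing model (per active second plus a per-request fee), here's a rough cost sketch. The rates below are hypothetical placeholders, not Cloudflare's published prices; check the current pricing page before budgeting.

```typescript
// Rough monthly cost model for a Containers workload.
// RATES ARE HYPOTHETICAL PLACEHOLDERS -- substitute Cloudflare's current pricing.
const RATE_PER_ACTIVE_SECOND = 0.00002; // $/s of active container time (assumed)
const RATE_PER_REQUEST = 0.0000003;     // $/request (assumed)

function estimateMonthlyCost(opts: {
  requestsPerDay: number;
  avgActiveSecondsPerRequest: number;
}): number {
  const monthlyRequests = opts.requestsPerDay * 30;
  // You pay only while the container is actively serving...
  const computeCost =
    monthlyRequests * opts.avgActiveSecondsPerRequest * RATE_PER_ACTIVE_SECOND;
  // ...plus a small flat fee per request.
  const requestCost = monthlyRequests * RATE_PER_REQUEST;
  return computeCost + requestCost;
}

// 100k resizes/day at ~50ms of container time each:
console.log(
  estimateMonthlyCost({
    requestsPerDay: 100_000,
    avgActiveSecondsPerRequest: 0.05,
  }).toFixed(2),
); // → "3.90"
```

The point of the exercise: compute time dominates, which is why the KV caching layer added later in this tutorial matters so much.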
Step 1: Install Wrangler and Authenticate
Open your terminal and install the latest Wrangler CLI globally:
npm install -g wrangler@latest
wrangler --version
# wrangler 4.x.x

Log in to your Cloudflare account:

wrangler login

This opens a browser tab so you can authorize the CLI. Once authorized, verify access:

wrangler whoami

You should see your account email and the account ID. Copy the account ID; you'll need it later.
Step 2: Create the Project Structure
Create a new directory and initialize the project:
mkdir image-edge-api
cd image-edge-api
npm init -y

Install the runtime dependencies for the container application:
npm install express sharp
npm install -D typescript @types/node @types/express tsx

Create the directory structure:
mkdir -p container/src worker/src
touch container/src/index.ts container/Dockerfile
touch worker/src/index.ts wrangler.jsonc
touch tsconfig.json .dockerignore

Add a basic tsconfig.json:
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"esModuleInterop": true,
"strict": true,
"skipLibCheck": true,
"outDir": "dist"
},
"include": ["container/src", "worker/src"]
}

And a .dockerignore so we don't bloat the image:
node_modules
dist
.git
.env
*.md
worker
Step 3: Build the Containerized API
Open container/src/index.ts and write a small Express app that resizes images:
import express, { Request, Response } from "express";
import sharp from "sharp";
const app = express();
const PORT = Number(process.env.PORT) || 8080;
app.use(express.raw({ type: "image/*", limit: "10mb" }));
app.get("/health", (_req: Request, res: Response) => {
res.json({ status: "ok", region: process.env.CF_REGION ?? "unknown" });
});
app.post("/resize", async (req: Request, res: Response) => {
const width = Number(req.query.w) || 800;
const format = (req.query.fmt as string) || "webp";
if (!Buffer.isBuffer(req.body) || req.body.length === 0) {
return res.status(400).json({ error: "no image body" });
}
try {
const output = await sharp(req.body)
.resize({ width, withoutEnlargement: true })
.toFormat(format as keyof sharp.FormatEnum)
.toBuffer();
res.set("Content-Type", `image/${format}`);
res.set("X-Container-Region", process.env.CF_REGION ?? "unknown");
res.send(output);
} catch (err) {
console.error("resize_failed", err);
res.status(500).json({ error: "resize failed" });
}
});
app.listen(PORT, () => {
console.log(`image-edge-api listening on :${PORT}`);
});

Notice we read CF_REGION from the environment: Cloudflare injects this automatically into running containers, so we can debug regional routing.
Step 4: Write the Dockerfile
Cloudflare Containers accepts any standard OCI image. Use a small Node base image and a multi-stage build to keep the final image lean.
# container/Dockerfile
FROM node:20-bookworm-slim AS builder
WORKDIR /app
COPY package*.json tsconfig.json ./
RUN npm ci
COPY container/src ./container/src
RUN npx tsc -p tsconfig.json
FROM node:20-bookworm-slim AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 8080
CMD ["node", "dist/container/src/index.js"]

Build it locally to make sure everything compiles:
docker build -t image-edge-api -f container/Dockerfile .
docker run --rm -p 8080:8080 image-edge-api

In another terminal, hit the health endpoint:
curl http://localhost:8080/health
# {"status":"ok","region":"unknown"}Press Ctrl+C to stop the local container.
Cloudflare Containers requires linux/amd64 images. If you're on Apple Silicon, build with the platform flag: docker build --platform=linux/amd64 ...
Step 5: Configure wrangler.jsonc for Containers
This is the important new piece. Cloudflare exposes Containers as a binding inside Workers — your Worker is the orchestrator that decides which container instance handles a request.
Open wrangler.jsonc:
{
"name": "image-edge-api",
"main": "worker/src/index.ts",
"compatibility_date": "2026-04-15",
"containers": [
{
"class_name": "ImageContainer",
"image": "./container/Dockerfile",
"instance_type": "standard",
"max_instances": 25
}
],
"durable_objects": {
"bindings": [
{
"name": "IMAGE_CONTAINER",
"class_name": "ImageContainer"
}
]
},
"migrations": [
{
"tag": "v1",
"new_sqlite_classes": ["ImageContainer"]
}
],
"kv_namespaces": [
{
"binding": "IMAGE_CACHE",
"id": "REPLACE_WITH_KV_ID"
}
],
"observability": { "enabled": true }
}

A few things worth understanding here:

- The containers block tells Cloudflare which Dockerfile to build and how big each instance should be. Instance types are dev, basic, standard, and enhanced; pick based on memory and CPU needs.
- Containers are exposed through Durable Objects. Each container is wrapped in a Durable Object class, so you get isolation, regional pinning, and lifecycle hooks for free.
- max_instances caps how many concurrent containers Cloudflare will spin up globally.
Create the KV namespace and replace the placeholder:
wrangler kv namespace create IMAGE_CACHE

Copy the returned ID into wrangler.jsonc.
Step 6: Write the Worker Orchestrator
The Worker is the public entry point. It receives every request, applies caching and validation, then forwards traffic to the container.
Open worker/src/index.ts:
import { Container, getContainer } from "@cloudflare/containers";
export interface Env {
IMAGE_CONTAINER: DurableObjectNamespace<ImageContainer>;
IMAGE_CACHE: KVNamespace;
}
export class ImageContainer extends Container {
defaultPort = 8080;
sleepAfter = "5m";
override onStart() {
console.log("container_started", { id: this.ctx.id.toString() });
}
override onError(err: unknown) {
console.error("container_error", err);
}
}
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url);
if (url.pathname === "/health") {
const container = getContainer(env.IMAGE_CONTAINER, "primary");
return container.fetch(request);
}
if (url.pathname === "/resize" && request.method === "POST") {
const cacheKey = await buildCacheKey(request);
const cached = await env.IMAGE_CACHE.get(cacheKey, "stream");
if (cached) {
return new Response(cached, {
headers: { "Content-Type": "image/webp", "X-Cache": "HIT" },
});
}
const container = getContainer(env.IMAGE_CONTAINER, "primary");
const upstream = await container.fetch(request);
if (upstream.ok) {
const buf = await upstream.arrayBuffer();
await env.IMAGE_CACHE.put(cacheKey, buf, { expirationTtl: 86400 });
return new Response(buf, {
headers: {
"Content-Type": upstream.headers.get("Content-Type") ?? "image/webp",
"X-Cache": "MISS",
},
});
}
return upstream;
}
return new Response("Not Found", { status: 404 });
},
};
async function buildCacheKey(request: Request): Promise<string> {
const url = new URL(request.url);
const body = await request.clone().arrayBuffer();
const hash = await crypto.subtle.digest("SHA-256", body);
const hex = [...new Uint8Array(hash)]
.map((b) => b.toString(16).padStart(2, "0"))
.join("");
return `${url.search}:${hex}`;
}

Install the Cloudflare Containers helper package:
npm install @cloudflare/containers

Three behaviors are worth pointing out:

- getContainer(namespace, name) returns a stub bound to a specific container instance. Pass the same name for sticky routing, or randomize it for load distribution.
- sleepAfter tells Cloudflare to spin the container down after five minutes of inactivity. You stop paying when nobody's using it, and warm starts are usually under 300 milliseconds.
- The Worker handles caching upstream of the container. Cloudflare KV reads cost a fraction of container compute, so caching common transforms is a major savings lever.
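To make the sticky-versus-random trade-off concrete, here's a small routing helper you could layer on top of the getContainer pattern. containerNameFor is a hypothetical helper of our own, not part of @cloudflare/containers.

```typescript
// Pick a container name per request. A stable name gives sticky routing
// (same instance every time); hashing into a small pool spreads load while
// keeping instances warm. Hypothetical helper, not a Cloudflare API.
type RoutingStrategy = "sticky" | "spread";

function containerNameFor(
  strategy: RoutingStrategy,
  sessionId: string,
  poolSize = 8,
): string {
  if (strategy === "sticky") {
    // Same session always maps to the same container instance.
    return `session-${sessionId}`;
  }
  // Spread: hash the id into a fixed pool instead of one container per
  // request, so you reuse warm instances rather than paying cold starts.
  let h = 0;
  for (const ch of sessionId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return `pool-${h % poolSize}`;
}

// Usage inside the Worker (sketch):
//   const name = containerNameFor("spread", crypto.randomUUID());
//   const container = getContainer(env.IMAGE_CONTAINER, name);
console.log(containerNameFor("sticky", "abc")); // → "session-abc"
```

The fixed-pool variant is a middle ground between the tutorial's single "primary" instance and a fully random name per request.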
Step 7: Local Development with Wrangler
Cloudflare's local dev environment now boots real containers via Docker — no mocking needed.
wrangler dev

Wrangler will:
- Build the Docker image locally
- Start a containerized instance
- Spin up the Worker on http://localhost:8787
- Forward your HTTP requests through the orchestrator
Test the full flow:
curl http://localhost:8787/health
# {"status":"ok","region":"local"}
curl -X POST "http://localhost:8787/resize?w=400&fmt=webp" \
--data-binary "@./sample.jpg" \
-H "Content-Type: image/jpeg" \
-o resized.webp

You now have a fully working stack, Worker plus container, running on your laptop, behaving exactly the way it will in production.
Step 8: Deploy to Cloudflare's Edge
Once you're happy with the local behavior, push everything to production:
wrangler deploy

Wrangler builds the Docker image, pushes it to Cloudflare's container registry, registers the Durable Object class, and rolls out the Worker. The first deploy can take three to five minutes because the image push is uncached. Subsequent deploys are usually under a minute.
When the deploy finishes, you'll see something like:
Deployed image-edge-api to:
https://image-edge-api.<your-subdomain>.workers.dev
Container instance class: ImageContainer (standard, max 25 instances)
Hit your live endpoint:
curl -X POST "https://image-edge-api.<subdomain>.workers.dev/resize?w=800" \
--data-binary "@./sample.jpg" \
-H "Content-Type: image/jpeg" \
-o output.webp

The first request triggers a cold container start, typically completing in around 600 milliseconds. Repeated requests reuse the warm instance and finish in tens of milliseconds.
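The cold-versus-warm difference is easy to measure yourself. The percentile helper below is plain TypeScript; the fetch loop is a hypothetical usage against your own endpoint, so it's left commented out.

```typescript
// Summarize request latencies: sort the samples and pick percentiles.
function percentile(samplesMs: number[], p: number): number {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.min(sorted.length - 1, Math.max(0, idx))];
}

// Time n sequential requests; the first one pays the cold start.
async function measure(url: string, n = 20): Promise<number[]> {
  const samples: number[] = [];
  for (let i = 0; i < n; i++) {
    const t0 = performance.now();
    await fetch(url);
    samples.push(performance.now() - t0);
  }
  return samples;
}

// Usage (replace with your own endpoint):
//   const s = await measure("https://image-edge-api.<subdomain>.workers.dev/health");
//   console.log("cold:", s[0], "p50:", percentile(s, 50), "p99:", percentile(s, 99));
```

Sample zero is your cold start; the p50 over the rest is your warm-path latency.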
Step 9: Configure Autoscaling and Regional Routing
By default, Cloudflare keeps a single "primary" container instance per region. For a real workload you'll want explicit scaling rules.
Update the containers block in wrangler.jsonc:
"containers": [
{
"class_name": "ImageContainer",
"image": "./container/Dockerfile",
"instance_type": "standard",
"max_instances": 50,
"scale_rules": {
"concurrent_requests": 30,
"cool_down_seconds": 60
},
"regions": ["wnam", "enam", "weu", "eeu", "apac", "mena"]
}
]What this configuration does:
- Cloudflare spins up an additional container in any region whose existing instances exceed 30 concurrent requests.
- After 60 seconds of below-threshold load, idle instances are spun down.
- The regions array restricts containers to the listed Cloudflare network regions, which is useful for data residency or to avoid serving traffic from regions that wouldn't benefit from latency reduction.
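A quick way to sanity-check those scaling numbers: the rule behaves like simple target tracking, where the instances needed in a region is concurrency divided by the per-instance threshold, rounded up and capped. This sketch models that decision for illustration only; it is not Cloudflare's actual scheduler.

```typescript
// Model the per-region scale decision: add instances when existing ones
// average more than `threshold` concurrent requests, never exceeding the
// global cap. Illustrative only -- not Cloudflare's scheduler.
function desiredInstances(
  concurrentRequests: number,
  threshold = 30,
  maxInstances = 50,
): number {
  const needed = Math.ceil(concurrentRequests / threshold);
  return Math.min(Math.max(needed, 1), maxInstances);
}

console.log(desiredInstances(10));   // → 1  (fits in one instance)
console.log(desiredInstances(95));   // → 4  (95 / 30 rounds up)
console.log(desiredInstances(5000)); // → 50 (capped by max_instances)
```

The cool_down_seconds setting is the other half of the loop: it keeps the controller from flapping by waiting 60 seconds of below-threshold load before scaling back down.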
For sticky routing (such as session-bound workloads), pass a stable name to getContainer. For embarrassingly parallel jobs, randomize:
const id = crypto.randomUUID();
const container = getContainer(env.IMAGE_CONTAINER, id);

Redeploy to apply the changes:

wrangler deploy

Step 10: Health Checks, Logs, and Observability
Cloudflare automatically restarts unhealthy containers, but you control what "healthy" means. Extend ImageContainer in worker/src/index.ts:
export class ImageContainer extends Container {
defaultPort = 8080;
sleepAfter = "5m";
healthCheck = {
path: "/health",
intervalSeconds: 30,
timeoutSeconds: 5,
failureThreshold: 3,
};
override onHealthCheckFailed(err: unknown) {
console.error("health_check_failed", err);
}
}

For logging, the observability setting ("observability": { "enabled": true } in wrangler.jsonc) automatically streams every console.log from both the Worker and the container into the Cloudflare dashboard. Tail logs in real time:
wrangler tail

You'll see structured events as traffic flows:
container_started { id: "abc123..." }
resize ok width=800 region="weu" duration_ms=42
For deeper analytics, ship logs to an external sink (Datadog, Axiom, S3) using Logpush — configure it once in the dashboard and Cloudflare handles batching for you.
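If you want the tail output to stay machine-parseable all the way to your Logpush sink, a tiny helper that emits one JSON object per line works in both the Worker and the container. This is a generic pattern, not a Cloudflare API.

```typescript
// Emit one JSON object per line so `wrangler tail` and downstream sinks
// can filter on fields instead of grepping free text.
type LogFields = Record<string, string | number | boolean>;

function logEvent(event: string, fields: LogFields = {}): string {
  const line = JSON.stringify({
    event,
    ts: new Date().toISOString(),
    ...fields,
  });
  console.log(line);
  return line;
}

// e.g. inside the /resize handler:
//   logEvent("resize_ok", { width: 800, region: process.env.CF_REGION ?? "unknown", duration_ms: 42 });
```

Keeping the event name as the first field makes the dashboard's log search predictable: every line starts with the same shape.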
Step 11: Security and Secrets
Never bake credentials into the Docker image. Use Wrangler's encrypted secrets, which appear as environment variables inside the container:
wrangler secret put IMGBB_API_KEY
# paste value, press Enter

Inside container/src/index.ts:
const apiKey = process.env.IMGBB_API_KEY;
if (!apiKey) {
console.error("missing_api_key");
process.exit(1);
}

For inbound auth, validate at the Worker tier before invoking the container:
const token = request.headers.get("Authorization");
if (token !== `Bearer ${env.SHARED_TOKEN}`) {
return new Response("Unauthorized", { status: 401 });
}

This pattern saves money: unauthorized requests never reach the container, so you're not billed for compute on rejected traffic.
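One refinement worth making: a plain !== comparison can in principle leak timing information about the secret. Workers also ship a non-standard crypto.subtle.timingSafeEqual you could use; the helper below is a portable sketch of the same idea.

```typescript
// Compare two secrets in constant time: always scan the full length
// instead of returning at the first mismatching character.
function timingSafeEqualStr(a: string, b: string): boolean {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  }
  return diff === 0;
}

// In the Worker, before touching the container:
//   const token = request.headers.get("Authorization") ?? "";
//   if (!timingSafeEqualStr(token, `Bearer ${env.SHARED_TOKEN}`)) {
//     return new Response("Unauthorized", { status: 401 });
//   }
```

The length check does exit early, but that only reveals the token's length, not its contents.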
Testing Your Implementation
Run a small load test against your live endpoint with hey:
hey -n 1000 -c 50 -m POST \
-H "Content-Type: image/jpeg" \
-D ./sample.jpg \
"https://image-edge-api.<subdomain>.workers.dev/resize?w=400"In the Cloudflare dashboard you should observe:
- Multiple container instances spawning across regions as concurrency rises
- Cache hit rate climbing as repeated requests resolve from KV
- p50 latency under 80 milliseconds for warm requests, p99 under 500 milliseconds
If you see scaling failures or 5xx spikes, check wrangler tail and the Containers tab in the dashboard for restart counts and resource pressure.
Troubleshooting
Container fails to start with exit code 137. Out of memory. Bump to a larger instance type (enhanced) or profile your image with docker stats.
Cold starts above two seconds. Your image is too large. Multi-stage builds, slim base images, and pruning dev dependencies usually drop image size by 60 to 80 percent.
Worker errors with "container not bound". The Durable Object migration didn't run. Confirm migrations in wrangler.jsonc includes the class and redeploy.
wrangler dev hangs at "starting container". Docker Desktop isn't running or your image targets the wrong architecture. Build explicitly with --platform=linux/amd64.
KV cache returns stale results. Increase expirationTtl, or add a content hash to the cache key so different inputs never collide.
Next Steps
You now have a working Cloudflare Containers stack. Here are a few directions to take it further:
- Add a queue with Cloudflare Queues for async batch processing
- Plug in a database — see our Cloudflare Workers, Hono, and D1 tutorial for a sibling pattern
- Run an ML model: transformers.js or a CPU-friendly ONNX model fits comfortably in a standard instance
- Wrap it in an API gateway with Hono for richer routing
- Ship a fullstack app — combine Containers with Next.js on Workers
Conclusion
Cloudflare Containers fills the awkward middle ground between Workers (perfect for stateless, fast functions) and traditional Kubernetes (overkill for most teams). With Containers you get real Docker images, real CPU time, and real persistent processes — all served from Cloudflare's edge, with the Worker tier doing routing and caching.
In this tutorial you set up a complete edge container stack: a Docker image with Express and Sharp, a Worker orchestrator with KV caching, autoscaling, regional pinning, health checks, and observability. The same skeleton works for PDF generation, ML inference, headless browser tasks, and anything else that's too heavy for a Worker but too cheap to justify a dedicated cluster.
Cloudflare Containers turns the global edge into a Docker host — and once you've built a service this way, you'll find yourself reaching for it any time you need real compute close to users.
Related Articles

Docker Compose for Full-Stack Developers: Next.js, PostgreSQL, and Redis
Learn how to containerize a full-stack Next.js application with PostgreSQL and Redis using Docker Compose. This hands-on tutorial covers multi-service orchestration, development workflows, hot reloading, health checks, and production-ready configurations.

Building Production-Ready REST APIs with FastAPI, PostgreSQL, and Docker
Learn how to build, test, and deploy a production-grade REST API using Python's FastAPI framework with PostgreSQL, SQLAlchemy, Alembic migrations, and Docker Compose — from zero to deployment.

Build and Deploy a Serverless API with Cloudflare Workers, Hono, and D1
Learn how to build a production-ready serverless REST API using Cloudflare Workers, the Hono web framework, and D1 SQLite database — from project setup to global deployment.