Tutorial · May 9, 2026 · 35 min read

Deploy Next.js to Cloudflare Workers with OpenNext: A Complete 2026 Guide

Ship a full Next.js 15 application — App Router, Server Actions, ISR, and dynamic routes — to Cloudflare Workers using the OpenNext adapter. Learn bindings, caching, R2, D1, and zero-cold-start production deployment.

Run Next.js anywhere — even where Node.js cannot. OpenNext is an open-source adapter that compiles a standard Next.js application to run on serverless platforms that do not ship a full Node.js runtime, such as Cloudflare Workers. With the official @opennextjs/cloudflare adapter you keep the Next.js developer experience while gaining sub-millisecond cold starts, 300+ edge locations, and direct bindings to D1, R2, KV, and Durable Objects.

What You'll Build

In this tutorial, you'll deploy a production-ready Next.js 15 application to Cloudflare Workers using OpenNext. The app is a small SaaS-style dashboard that demonstrates every feature you typically rely on:

  • App Router with nested layouts and React Server Components
  • Server Actions writing to a Cloudflare D1 database
  • Image uploads to R2 (S3-compatible object storage)
  • Incremental Static Regeneration (ISR) backed by Cloudflare KV
  • Edge middleware for auth and i18n
  • Dynamic routes with generateStaticParams and on-demand revalidation
  • Streaming and Suspense with Server Components

By the end you'll have a globally distributed Next.js app that is cheaper, faster, and simpler to operate than a traditional Node.js deployment — without rewriting a single page.

Why OpenNext?

Vercel is the easiest path to ship Next.js, but it's not the only one. Self-hosting Next.js on Node.js works, yet it requires an always-on server, regional latency, and ops overhead. OpenNext was created to bridge that gap by translating the Next.js build output into platform-specific bundles.

The Cloudflare adapter (@opennextjs/cloudflare) takes the Next.js standalone build and rewrites it to run inside the Workers V8 isolate runtime. Compared to traditional Node.js hosting, you get:

  • Sub-millisecond cold starts — V8 isolates spin up almost instantly
  • 300+ points of presence — your app runs at the edge worldwide
  • Direct platform bindings — D1, R2, KV, Durable Objects, Queues, Vectorize accessible as JavaScript objects with no SDK overhead
  • Pricing that scales to zero — pay per request, not per running instance
  • No Docker, no Kubernetes, no PM2 — wrangler deploy and you're live

OpenNext supports Next.js 14 and Next.js 15, including Server Actions, Partial Prerendering, and the React 19 features that landed in 2025.

Prerequisites

Before starting, ensure you have:

  • Node.js 20 or newer (node --version)
  • npm 10+, pnpm 9+, or bun 1.2+
  • A Cloudflare account (the free plan is enough to follow along)
  • The Wrangler CLI installed globally: npm install -g wrangler@latest
  • Basic familiarity with Next.js App Router and TypeScript
  • A code editor (VS Code recommended)

Authenticate Wrangler against your Cloudflare account once:

wrangler login

This opens a browser window and stores credentials locally so subsequent commands authenticate automatically.

Step 1: Create a New Next.js Project

Start from a fresh Next.js 15 project. The OpenNext team maintains a starter template, but for clarity we'll go from scratch so you understand every moving part.

npx create-next-app@latest noqta-edge-app \
  --typescript \
  --app \
  --tailwind \
  --eslint \
  --src-dir \
  --import-alias "@/*"
 
cd noqta-edge-app

Confirm everything runs locally:

npm run dev

Open http://localhost:3000 and you should see the default Next.js landing page. Stop the dev server before continuing.

Step 2: Install the OpenNext Cloudflare Adapter

Install the adapter and its peer dependencies:

npm install --save-dev @opennextjs/cloudflare wrangler@latest
npm install --save-dev @cloudflare/workers-types

The @opennextjs/cloudflare package contains both the build adapter and helpers for accessing Cloudflare bindings from inside Next.js code. The @cloudflare/workers-types package provides TypeScript types for D1, R2, KV, Queues, and the rest of the Workers API.

Add the Workers types to tsconfig.json so editor autocomplete works correctly:

{
  "compilerOptions": {
    "types": ["@cloudflare/workers-types"]
  }
}

Step 3: Configure wrangler.toml

Create a wrangler.toml file in the project root. This file tells Wrangler what to deploy, where to deploy it, and which Cloudflare resources to bind.

name = "noqta-edge-app"
main = ".open-next/worker.js"
compatibility_date = "2026-05-01"
compatibility_flags = ["nodejs_compat"]
 
# Required by OpenNext: serve static assets directly
[assets]
directory = ".open-next/assets"
binding = "ASSETS"
 
# Cache backed by KV — used by ISR and the fetch cache
[[kv_namespaces]]
binding = "NEXT_INC_CACHE_KV"
id = "REPLACE_WITH_KV_ID"
 
# Database for Server Actions
[[d1_databases]]
binding = "DB"
database_name = "noqta-edge-db"
database_id = "REPLACE_WITH_D1_ID"
 
# Object storage for uploads
[[r2_buckets]]
binding = "UPLOADS"
bucket_name = "noqta-edge-uploads"

The compatibility_flags = ["nodejs_compat"] line is critical. It enables Cloudflare's polyfilled Node.js APIs that Next.js relies on (Buffer, crypto, util, etc.). Without it your build will run, but several Next.js internals will throw at runtime.
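As a quick sanity check, the built-ins that nodejs_compat polyfills behave the same under local Node and inside the Worker. A minimal sketch using only standard modules:

```typescript
// These Node built-ins are among those nodejs_compat polyfills in Workers.
import { Buffer } from "node:buffer";
import { createHash } from "node:crypto";

// Base64 encoding via Buffer, a pattern Next.js internals rely on.
const encoded = Buffer.from("hello worker").toString("base64");
console.log(encoded); // → "aGVsbG8gd29ya2Vy"

// SHA-256 via node:crypto; a hex digest is always 64 characters.
const digest = createHash("sha256").update("hello worker").digest("hex");
console.log(digest.length); // → 64
```

If imports like these throw in production but work in local Node, the compatibility flag or date is the first thing to check.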

Step 4: Create the Cloudflare Resources

Provision the KV namespace, D1 database, and R2 bucket from the command line. Each command prints the ID you need to paste back into wrangler.toml.

# KV namespace for ISR / cache
wrangler kv namespace create NEXT_INC_CACHE_KV
 
# D1 database
wrangler d1 create noqta-edge-db
 
# R2 bucket
wrangler r2 bucket create noqta-edge-uploads

Update wrangler.toml with the IDs returned by the first two commands. R2 buckets are referenced by name, so no ID is required there.

Now create a database schema. Save the following file as schema.sql:

CREATE TABLE IF NOT EXISTS posts (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  title TEXT NOT NULL,
  body TEXT NOT NULL,
  created_at INTEGER NOT NULL DEFAULT (unixepoch())
);
 
CREATE INDEX IF NOT EXISTS idx_posts_created_at
  ON posts (created_at DESC);

Apply it locally and remotely:

# Apply to the local D1 emulator
wrangler d1 execute noqta-edge-db --local --file=schema.sql
 
# Apply to the production database
wrangler d1 execute noqta-edge-db --remote --file=schema.sql

Step 5: Add the OpenNext Build Configuration

OpenNext needs a tiny configuration file to know how to build the project. Create open-next.config.ts in the project root:

import { defineCloudflareConfig } from "@opennextjs/cloudflare";
import kvIncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/kv-incremental-cache";
 
export default defineCloudflareConfig({
  incrementalCache: kvIncrementalCache,
});

This single file wires up KV-based ISR caching. The adapter ships with several override modules (R2 cache, regional cache, durable object cache); KV is the default and works for most apps.
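As an illustration, swapping ISR storage from KV to the R2 override is a one-line change. This is a sketch; verify the override module path against the adapter version you have installed, and remember to bind the bucket in wrangler.toml per the OpenNext docs:

```typescript
import { defineCloudflareConfig } from "@opennextjs/cloudflare";
// Assumed path, mirroring the KV override above -- check your installed version.
import r2IncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/r2-incremental-cache";

export default defineCloudflareConfig({
  // Rendered pages and fetch-cache entries now land in an R2 bucket instead of KV.
  incrementalCache: r2IncrementalCache,
});
```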

Update package.json with build and deploy scripts:

{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "preview": "opennextjs-cloudflare build && wrangler dev",
    "deploy": "opennextjs-cloudflare build && wrangler deploy"
  }
}

The build script runs the standard Next.js build. The deploy script runs the OpenNext build (which transforms the Next.js output into a Worker bundle) and then ships it with Wrangler.

Step 6: Read Bindings From Inside Next.js

The cleanest way to access Cloudflare bindings from Server Components, Route Handlers, or Server Actions is through OpenNext's getCloudflareContext() helper. Create src/lib/cf.ts:

import { getCloudflareContext } from "@opennextjs/cloudflare";
import type { D1Database, R2Bucket, KVNamespace } from "@cloudflare/workers-types";
 
interface Env {
  DB: D1Database;
  UPLOADS: R2Bucket;
  NEXT_INC_CACHE_KV: KVNamespace;
}
 
export async function getEnv(): Promise<Env> {
  const ctx = await getCloudflareContext({ async: true });
  return ctx.env as unknown as Env;
}

Now any Server Component can pull the environment in a single line:

const { DB } = await getEnv();

Type-safe, zero boilerplate, and fully typed against your project's bindings.

Step 7: Build a Server Action With D1

Server Actions are the cleanest way to mutate data in Next.js 15, and they work flawlessly on OpenNext. Create src/app/posts/actions.ts:

"use server";
 
import { revalidatePath } from "next/cache";
import { redirect } from "next/navigation";
import { getEnv } from "@/lib/cf";
 
export async function createPost(formData: FormData) {
  const title = String(formData.get("title") ?? "").trim();
  const body = String(formData.get("body") ?? "").trim();
 
  if (!title || !body) {
    throw new Error("Title and body are required");
  }
 
  const { DB } = await getEnv();
 
  await DB.prepare(
    "INSERT INTO posts (title, body) VALUES (?, ?)"
  )
    .bind(title, body)
    .run();
 
  revalidatePath("/posts");
  redirect("/posts");
}

Then a Server Component that lists and creates posts in src/app/posts/page.tsx:

import { getEnv } from "@/lib/cf";
import { createPost } from "./actions";
 
export const revalidate = 60;
 
interface Post {
  id: number;
  title: string;
  body: string;
  created_at: number;
}
 
export default async function PostsPage() {
  const { DB } = await getEnv();
 
  const result = await DB.prepare(
    "SELECT id, title, body, created_at FROM posts ORDER BY created_at DESC LIMIT 50"
  ).all<Post>();
 
  const posts = result.results ?? [];
 
  return (
    <main className="mx-auto max-w-2xl p-8">
      <h1 className="text-3xl font-bold">Posts</h1>
 
      <form action={createPost} className="mt-6 space-y-3">
        <input
          name="title"
          placeholder="Title"
          className="w-full rounded border p-2"
          required
        />
        <textarea
          name="body"
          placeholder="What's on your mind?"
          className="w-full rounded border p-2 h-28"
          required
        />
        <button
          type="submit"
          className="rounded bg-orange-500 px-4 py-2 font-medium text-white"
        >
          Publish
        </button>
      </form>
 
      <ul className="mt-10 space-y-6">
        {posts.map((p) => (
          <li key={p.id} className="rounded border p-4">
            <h2 className="text-xl font-semibold">{p.title}</h2>
            <p className="mt-2 text-gray-700">{p.body}</p>
          </li>
        ))}
      </ul>
    </main>
  );
}

The export const revalidate = 60 directive is honored by OpenNext: the rendered page is stored in KV and served from cache for up to 60 seconds, dramatically reducing D1 reads under load.
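The decision the adapter makes on each request can be modeled as plain logic. This is a conceptual sketch only; the names below are ours, not OpenNext internals:

```typescript
// Conceptual model of ISR: serve fresh cache hits directly, serve stale
// entries immediately while re-rendering in the background, render on a miss.
type CacheEntry = { renderedAt: number; html: string };
type Decision = "HIT" | "STALE_REVALIDATE" | "MISS";

function decide(
  entry: CacheEntry | undefined,
  nowSeconds: number,
  revalidateSeconds: number
): Decision {
  if (!entry) return "MISS"; // nothing in KV: render and store
  const age = nowSeconds - entry.renderedAt;
  if (age < revalidateSeconds) return "HIT"; // inside the window: no D1 reads
  return "STALE_REVALIDATE"; // past the window: stale copy now, refresh later
}

const entry: CacheEntry = { renderedAt: 1000, html: "<html>…</html>" };
console.log(decide(entry, 1030, 60)); // age 30s → "HIT"
console.log(decide(entry, 1100, 60)); // age 100s → "STALE_REVALIDATE"
console.log(decide(undefined, 1100, 60)); // → "MISS"
```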

Step 8: Upload Files to R2

R2 is Cloudflare's S3-compatible object store with zero egress fees. Add a Route Handler at src/app/api/upload/route.ts:

import { NextRequest, NextResponse } from "next/server";
import { getEnv } from "@/lib/cf";
 
export async function POST(req: NextRequest) {
  const formData = await req.formData();
  const file = formData.get("file");
 
  if (!(file instanceof File)) {
    return NextResponse.json({ error: "No file provided" }, { status: 400 });
  }
 
  const { UPLOADS } = await getEnv();
  const key = `uploads/${crypto.randomUUID()}-${file.name}`;
 
  await UPLOADS.put(key, await file.arrayBuffer(), {
    httpMetadata: { contentType: file.type },
  });
 
  return NextResponse.json({
    key,
    url: `/api/files/${encodeURIComponent(key)}`,
  });
}

And a matching read handler at src/app/api/files/[...key]/route.ts:

import { NextRequest } from "next/server";
import { getEnv } from "@/lib/cf";
 
export async function GET(
  _req: NextRequest,
  { params }: { params: Promise<{ key: string[] }> }
) {
  const { key } = await params;
  const fullKey = key.join("/");
 
  const { UPLOADS } = await getEnv();
  const obj = await UPLOADS.get(fullKey);
 
  if (!obj) {
    return new Response("Not found", { status: 404 });
  }
 
  return new Response(obj.body, {
    headers: {
      "content-type": obj.httpMetadata?.contentType ?? "application/octet-stream",
      "cache-control": "public, max-age=31536000, immutable",
    },
  });
}

R2 streams responses directly through the Worker — no buffering, no double-hop. You can serve assets larger than 100 MB without breaking a sweat.

Step 9: Run the App Locally on Wrangler

Next.js's own dev server cannot expose Cloudflare bindings, so for end-to-end local testing use Wrangler:

npm run preview

This first runs opennextjs-cloudflare build, which produces a .open-next directory containing the Worker entry point and static assets. Wrangler then serves the Worker on http://localhost:8787, with full access to your local D1 database, KV namespace, and R2 bucket emulators.

You should see logs like:

Building Next.js (production)...
Building OpenNext Cloudflare bundle...
Bundling worker.js (4.2 MB minified, 1.1 MB gzipped)
Ready on http://localhost:8787

Hit the /posts route, submit the form, and confirm rows appear in the local D1 instance:

wrangler d1 execute noqta-edge-db --local --command="SELECT * FROM posts;"

Step 10: Deploy to Production

When you're happy locally, ship to production with one command:

npm run deploy

Wrangler uploads the bundle, registers the Worker on a *.workers.dev URL, and binds your KV, D1, and R2 resources. Within a couple of seconds your Next.js app is running on every Cloudflare data center on the planet.

Bind a custom domain through the Cloudflare dashboard (Workers and Pages, then Custom Domains) or via Wrangler:

wrangler deploy --routes "edge.noqta.tn/*"

Cloudflare automatically provisions and renews TLS certificates — no Let's Encrypt scripts to maintain.

Caching, ISR, and revalidatePath

OpenNext implements the full Next.js caching contract on top of Cloudflare KV:

  • Static pages are written once at build time and pushed to KV
  • ISR pages (export const revalidate = N) are revalidated in the background on the first request after expiration
  • revalidatePath() and revalidateTag() in Server Actions purge the relevant KV entries instantly
  • The fetch cache (fetch(url, { next: { revalidate: 300 } })) is also stored in KV

Because KV caches values at the edge locations that serve them, repeat cache hits are typically answered in a few tens of milliseconds, often single-digit once a value is hot at a location. Keep in mind that KV is eventually consistent, so a freshly revalidated page can take up to a minute to propagate worldwide.

If you need lower-latency dynamic content (per-request, no caching), OpenNext also exposes a regional cache override and a Durable Object cache override for strongly consistent invalidation. See the OpenNext docs for the trade-offs.
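Under the hood, tag-based purging is essentially an index from tags to cache keys: revalidateTag drops every key that carries the tag. A toy in-memory model makes the mechanics concrete (names are ours, not OpenNext internals):

```typescript
// Toy model of tag invalidation over a key-value page cache.
const cache = new Map<string, string>();
const tagIndex = new Map<string, Set<string>>();

// Store a rendered page under its cache key, recording each tag it carries.
function put(key: string, value: string, tags: string[]): void {
  cache.set(key, value);
  for (const tag of tags) {
    if (!tagIndex.has(tag)) tagIndex.set(tag, new Set());
    tagIndex.get(tag)!.add(key);
  }
}

// The spirit of revalidateTag(tag): delete every entry carrying the tag.
function invalidateTag(tag: string): number {
  const keys = tagIndex.get(tag) ?? new Set<string>();
  for (const key of keys) cache.delete(key);
  tagIndex.delete(tag);
  return keys.size;
}

put("/posts", "<html>list</html>", ["posts"]);
put("/posts/1", "<html>one</html>", ["posts", "post:1"]);
console.log(invalidateTag("posts")); // → 2 entries purged
console.log(cache.size); // → 0
```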

Edge Middleware

Next.js middleware runs at the edge by default — and with OpenNext, that means inside the same Worker. A simple auth gate looks identical to a Vercel deployment:

import { NextRequest, NextResponse } from "next/server";
 
export function middleware(req: NextRequest) {
  const session = req.cookies.get("session");
 
  if (!session && req.nextUrl.pathname.startsWith("/dashboard")) {
    const loginUrl = new URL("/login", req.nextUrl);
    loginUrl.searchParams.set("from", req.nextUrl.pathname);
    return NextResponse.redirect(loginUrl);
  }
 
  return NextResponse.next();
}
 
export const config = {
  matcher: ["/dashboard/:path*"],
};

No special imports, no platform-specific wrappers — the same code that runs on Vercel runs on Cloudflare Workers.

Observability

Cloudflare ships first-class observability for Workers:

  • Real-time logs with wrangler tail
  • Workers Analytics in the dashboard (request counts, error rates, p50/p99 latency)
  • Tail Workers for forwarding logs to Datadog, Logflare, or Honeycomb
  • OpenTelemetry support via the otel-cf-workers package

Enable structured logging by simply using console.log() in your Server Components — Wrangler captures them with millisecond timestamps and request IDs.
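If you want those lines machine-parseable downstream, a tiny wrapper that emits one JSON object per line is enough. This helper is our own sketch, not a Cloudflare API:

```typescript
// Emit one JSON object per log line so Tail Workers, Datadog, or Honeycomb
// can parse fields without regexes. Returns the line to make it testable.
function logEvent(
  level: "info" | "warn" | "error",
  message: string,
  fields: Record<string, unknown> = {}
): string {
  const line = JSON.stringify({ level, message, ...fields, ts: Date.now() });
  console.log(line);
  return line;
}

const line = logEvent("info", "post.created", { postId: 42 });
```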

Testing Your Implementation

Verify the deployment works end-to-end:

  1. Visit your production URL and load /posts
  2. Submit the form with a new title and body
  3. Confirm the new post appears (Server Action wrote to D1)
  4. Check wrangler tail for the live request log
  5. Run wrangler d1 execute noqta-edge-db --remote --command="SELECT COUNT(*) FROM posts;" and verify the row count
  6. Upload a file via the /api/upload endpoint and read it back through /api/files/[key]

If all six steps pass, you have a fully functional Next.js application running on Cloudflare Workers.

Troubleshooting

Build fails with Cannot find module 'node:fs' — make sure compatibility_flags = ["nodejs_compat"] is set in wrangler.toml, and that your compatibility_date is on or after 2024-09-23.

getCloudflareContext() throws during local Next.js dev — Next.js's next dev command does not expose bindings by default. Either call the adapter's initOpenNextCloudflareForDev() helper from next.config.ts to make bindings available during next dev, or use npm run preview (which calls wrangler dev) for any code path that touches D1, R2, or KV during development.

Worker bundle exceeds the size limit — the free plan caps compressed Worker scripts at 3 MB (the paid plan raises this to 10 MB). Audit the .open-next output for heavy server-only dependencies, load them via dynamic imports or drop them entirely, and trim unused packages; if the app genuinely needs more headroom, upgrade to the paid plan.

ISR pages are not refreshing — confirm NEXT_INC_CACHE_KV is bound and that you have not accidentally set dynamic = "force-static". Tail the Worker and look for the cache key on each request.

Static assets return 404 — the [assets] block in wrangler.toml must point at .open-next/assets. Check the directory exists after running the build.

Next Steps

  • Add authentication with Better Auth or Auth.js v5 — both work on Workers
  • Stream AI completions through Workers AI or the OpenAI Realtime API
  • Move long jobs to Cloudflare Queues triggered from Server Actions
  • Add Vectorize for semantic search and RAG on top of your D1 data
  • Set up Wrangler environments (--env staging, --env production) for safe multi-stage deploys
  • Add a GitHub Actions workflow that runs npm run deploy on every push to main

Conclusion

OpenNext makes it possible to take any Next.js 15 application — App Router, Server Actions, ISR, middleware, the works — and run it on Cloudflare Workers with a single configuration file and a single deploy command. You get global distribution, instant cold starts, and direct access to first-party platform primitives like D1, R2, and KV, all while keeping the standard Next.js developer experience that your team already knows.

For a small SaaS or a high-traffic content site, this stack is dramatically cheaper than a Node.js host and significantly faster than a single-region Vercel function. For an enterprise workload, the same patterns scale to billions of requests without ever touching a Kubernetes cluster.

The Next.js you know, deployed everywhere. That's the OpenNext promise — and in 2026 it's finally a production reality.