Upstash Redis and Next.js: Rate Limiting, Caching, and Message Queues

By AI Bot ·


Modern web applications face three recurring challenges: protecting APIs from abuse, speeding up server responses, and processing background operations reliably. Redis is the classic solution for all three, but hosting and maintaining a Redis server in production takes time and adds operational overhead.

Upstash Redis eliminates this friction by offering a serverless Redis, billed per request, with a native TypeScript SDK. No server to manage, no persistent connection to maintain — every call goes through HTTP, making it perfect for serverless environments like Vercel, Cloudflare Workers, or AWS Lambda.

In this tutorial, you will build a contact management API with Next.js that integrates three essential Redis patterns:

  • Rate limiting — throttle requests per IP to protect your endpoints
  • Caching — cache API responses to reduce latency and database load
  • Message queues — process asynchronous tasks with QStash (Upstash's serverless messaging service)

Prerequisites

Before starting, make sure you have:

  • Node.js 20+ installed
  • A free Upstash account — sign up at console.upstash.com
  • Basic knowledge of React and TypeScript
  • Familiarity with Next.js App Router (API routes, Server Actions)
  • A code editor (VS Code recommended)

What You'll Build

A contact management system with:

  • Per-IP rate limiting — blocks clients exceeding 10 requests per minute
  • Smart caching — stores frequently queried results with automatic invalidation
  • Processing queue — sends welcome emails via QStash when a contact is created
  • Monitoring dashboard — visualizes Redis metrics in real time

Step 1: Create the Next.js Project

Initialize a new Next.js project with TypeScript and the App Router:

npx create-next-app@latest upstash-contacts --typescript --tailwind --app --src-dir
cd upstash-contacts

Install the Upstash dependencies:

npm install @upstash/redis @upstash/ratelimit @upstash/qstash

The @upstash/redis package provides the HTTP Redis client. @upstash/ratelimit offers ready-to-use rate limiting algorithms. @upstash/qstash handles serverless message queues.

Step 2: Configure Upstash Redis

Log in to the Upstash console and create a new Redis database. Choose the region closest to where your application is deployed (e.g., eu-west-1 for a deployment in Europe) to minimize round-trip latency.

Copy the connection variables and add them to your .env.local file:

UPSTASH_REDIS_REST_URL=https://your-instance.upstash.io
UPSTASH_REDIS_REST_TOKEN=AXxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
QSTASH_TOKEN=eyJxxxxxxxxxxxxxxxxxxxxxxxxx
QSTASH_CURRENT_SIGNING_KEY=sig_xxxxxxxxxxxxxxxx
QSTASH_NEXT_SIGNING_KEY=sig_xxxxxxxxxxxxxxxx

Now create the shared Redis configuration file:

// src/lib/redis.ts
import { Redis } from "@upstash/redis";
 
export const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

This client uses the HTTP protocol — no persistent TCP connection. Each call is an independent REST request, which is ideal for serverless functions that can be created and destroyed at any time.
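You can see this HTTP nature directly: every Redis command maps to a REST call. A quick sanity check with curl (the URL below is a placeholder; substitute your own UPSTASH_REDIS_REST_URL and token from the console):

```shell
# SET then GET over plain HTTPS: no Redis protocol, no connection pool.
# Replace the placeholder host with your own instance URL.
curl -s https://your-instance.upstash.io/set/hello/world \
  -H "Authorization: Bearer $UPSTASH_REDIS_REST_TOKEN"
# → {"result":"OK"}

curl -s https://your-instance.upstash.io/get/hello \
  -H "Authorization: Bearer $UPSTASH_REDIS_REST_TOKEN"
# → {"result":"world"}
```

The TypeScript client is a thin typed wrapper over exactly these endpoints.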

Step 3: Implement Rate Limiting

Rate limiting protects your APIs from abuse and denial-of-service attacks. Upstash provides a dedicated module with multiple algorithms.

Create the Rate Limiter

// src/lib/rate-limit.ts
import { Ratelimit } from "@upstash/ratelimit";
import { redis } from "./redis";
 
// Sliding window algorithm: 10 requests per minute per identifier
export const ratelimit = new Ratelimit({
  redis,
  limiter: Ratelimit.slidingWindow(10, "1 m"),
  analytics: true, // Enables tracking on the Upstash dashboard
  prefix: "ratelimit:contacts-api",
});

The sliding window is the best choice for most use cases. Unlike the fixed window, which abruptly resets its counter at every interval boundary (letting a client burst at the edge of two windows), the sliding window weighs requests across adjacent windows and distributes them evenly over time.

Upstash offers three algorithms:

  • Fixed Window — simple counter that resets at a fixed interval
  • Sliding Window — smoothing between two windows to avoid spikes
  • Token Bucket — allows controlled bursts, ideal for APIs with irregular traffic patterns
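The sliding-window estimate boils down to simple arithmetic: the previous window's count is weighted by how much of it still overlaps the current window. A hypothetical helper (not part of the SDK) showing that calculation:

```typescript
// Hypothetical helper, not part of @upstash/ratelimit: estimates the
// request count a sliding window sees by weighting the previous window's
// count by its remaining overlap with the current window.
function slidingWindowEstimate(
  prevCount: number,  // hits recorded in the previous window
  currCount: number,  // hits so far in the current window
  windowMs: number,   // window size, e.g. 60_000 for "1 m"
  elapsedMs: number   // time elapsed inside the current window
): number {
  const overlap = (windowMs - elapsedMs) / windowMs;
  return Math.floor(prevCount * overlap + currCount);
}

// 30s into a 60s window, half of the previous window still counts:
// slidingWindowEstimate(8, 4, 60_000, 30_000) === 8
```

This is why a burst right at a window boundary cannot double the effective limit, as it can with a fixed window.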

Create the Rate Limiting Middleware

// src/middleware.ts
import { NextRequest, NextResponse } from "next/server";
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";
 
const ratelimit = new Ratelimit({
  redis: new Redis({
    url: process.env.UPSTASH_REDIS_REST_URL!,
    token: process.env.UPSTASH_REDIS_REST_TOKEN!,
  }),
  limiter: Ratelimit.slidingWindow(10, "1 m"),
  analytics: true,
  prefix: "ratelimit:api",
});
 
export async function middleware(request: NextRequest) {
  // Apply only to API routes
  if (!request.nextUrl.pathname.startsWith("/api")) {
    return NextResponse.next();
  }
 
  // x-forwarded-for may contain a comma-separated proxy chain; the
  // first entry is the original client
  const ip =
    request.headers.get("x-forwarded-for")?.split(",")[0]?.trim() ?? "127.0.0.1";
  const { success, limit, remaining, reset } = await ratelimit.limit(ip);
 
  if (!success) {
    return NextResponse.json(
      { error: "Too many requests. Please try again later." },
      {
        status: 429,
        headers: {
          "X-RateLimit-Limit": limit.toString(),
          "X-RateLimit-Remaining": remaining.toString(),
          "X-RateLimit-Reset": reset.toString(),
          "Retry-After": Math.ceil((reset - Date.now()) / 1000).toString(),
        },
      }
    );
  }
 
  const response = NextResponse.next();
  response.headers.set("X-RateLimit-Limit", limit.toString());
  response.headers.set("X-RateLimit-Remaining", remaining.toString());
  response.headers.set("X-RateLimit-Reset", reset.toString());
 
  return response;
}
 
export const config = {
  matcher: "/api/:path*",
};

This middleware intercepts all requests to /api/*, checks the rate limit per IP, and returns a 429 Too Many Requests status if the client has exceeded the limit. The X-RateLimit-* headers let the client know how many requests they have remaining.
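On the client side, those headers make polite retries straightforward. A small hypothetical helper (names are my own, not from the tutorial code) that decides how long to back off given a response's headers:

```typescript
// Hypothetical client-side helper: read the rate-limit headers set by the
// middleware and decide how long to wait before the next request.
function retryDelayMs(headers: Record<string, string>): number {
  const remaining = Number(headers["x-ratelimit-remaining"] ?? "1");
  if (remaining > 0) return 0; // budget left, no need to wait
  // Retry-After is expressed in seconds per the HTTP spec
  return Number(headers["retry-after"] ?? "1") * 1000;
}
```

With fetch, you would build the record from `response.headers`; note that the Headers API normalizes names to lowercase, matching the keys above.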

Step 4: Implement Caching

Caching reduces latency and database load. Upstash Redis is ideal for this thanks to built-in TTL (Time To Live) support and the full range of native Redis data structures.

Cache-Aside Pattern

The most common pattern is cache-aside: check the cache before querying the source, and store the result in the cache for subsequent requests.

// src/lib/cache.ts
import { redis } from "./redis";
 
interface CacheOptions {
  ttl?: number; // Time to live in seconds (default: 60)
  prefix?: string;
}
 
export async function cached<T>(
  key: string,
  fetcher: () => Promise<T>,
  options: CacheOptions = {}
): Promise<T> {
  const { ttl = 60, prefix = "cache" } = options;
  const cacheKey = `${prefix}:${key}`;
 
  // 1. Check the cache
  const cached = await redis.get<T>(cacheKey);
  if (cached !== null) {
    return cached;
  }
 
  // 2. Execute the fetcher if not cached
  const data = await fetcher();
 
  // 3. Store in cache with TTL (the SDK serializes values itself,
  // so pass the object directly rather than JSON.stringify-ing it)
  await redis.set(cacheKey, data, { ex: ttl });
 
  return data;
}
 
export async function invalidateCache(pattern: string): Promise<void> {
  // Get all keys matching the pattern. Note: KEYS is O(N) over the
  // whole keyspace; fine at small scale, but prefer SCAN in production.
  const keys = await redis.keys(`cache:${pattern}*`);
  if (keys.length > 0) {
    await redis.del(...keys);
  }
}

Using the Cache in an API Route

// src/app/api/contacts/route.ts
import { NextRequest, NextResponse } from "next/server";
import { cached, invalidateCache } from "@/lib/cache";
 
// Simulate a database
const db = {
  contacts: [
    { id: "1", name: "Amine Ben Ali", email: "amine@example.com" },
    { id: "2", name: "Sara Trabelsi", email: "sara@example.com" },
    { id: "3", name: "Youssef Hamdi", email: "youssef@example.com" },
  ],
};
 
export async function GET(request: NextRequest) {
  const searchParams = request.nextUrl.searchParams;
  const query = searchParams.get("q") || "";
 
  const contacts = await cached(
    `contacts:list:${query}`,
    async () => {
      // Simulate a slow database query
      await new Promise((resolve) => setTimeout(resolve, 500));
 
      if (query) {
        return db.contacts.filter(
          (c) =>
            c.name.toLowerCase().includes(query.toLowerCase()) ||
            c.email.toLowerCase().includes(query.toLowerCase())
        );
      }
      return db.contacts;
    },
    { ttl: 30 } // Cache for 30 seconds
  );
 
  return NextResponse.json({ contacts });
}
 
export async function POST(request: NextRequest) {
  const body = await request.json();
  const { name, email } = body;
 
  if (!name || !email) {
    return NextResponse.json(
      { error: "Name and email are required" },
      { status: 400 }
    );
  }
 
  const newContact = {
    id: Date.now().toString(),
    name,
    email,
  };
 
  db.contacts.push(newContact);
 
  // Invalidate cache after a write operation
  await invalidateCache("contacts:");
 
  return NextResponse.json({ contact: newContact }, { status: 201 });
}

The key point is cache invalidation on writes. Without it, users would keep seeing stale data. The call invalidateCache("contacts:") removes every cache entry whose key starts with cache:contacts:.
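One caveat: KEYS walks the entire keyspace and can block larger databases, so production invalidation is usually written with a cursor-based SCAN loop instead. Below is a sketch with the Redis calls injected behind a minimal interface (my own, for testability); with `@upstash/redis` you would adapt it to the client's actual `scan` signature:

```typescript
// Sketch of SCAN-based invalidation. The Redis calls are injected so the
// loop can be unit-tested; in the app you would back this with the
// shared @upstash/redis client.
interface ScanClient {
  scan(
    cursor: number,
    opts: { match: string; count: number }
  ): Promise<[number, string[]]>;
  del(...keys: string[]): Promise<number>;
}

async function invalidateByScan(
  client: ScanClient,
  pattern: string
): Promise<number> {
  let cursor = 0;
  let deleted = 0;
  do {
    // SCAN returns a page of matching keys plus the next cursor;
    // a cursor of 0 means the iteration is complete.
    const [next, keys] = await client.scan(cursor, { match: pattern, count: 100 });
    if (keys.length > 0) {
      deleted += await client.del(...keys);
    }
    cursor = next;
  } while (cursor !== 0);
  return deleted;
}
```

Unlike KEYS, each SCAN call touches only a bounded slice of the keyspace, so it never stalls the server.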

Advanced Caching: Stale-While-Revalidate

For data that changes frequently but tolerates a slight delay, the stale-while-revalidate pattern offers the best trade-off between performance and freshness:

// src/lib/swr-cache.ts
import { redis } from "./redis";
 
interface SWRCacheOptions {
  ttl: number; // "Fresh" time to live in seconds
  staleFor: number; // Additional time the data is served but revalidated
  prefix?: string;
}
 
interface CachedEntry<T> {
  data: T;
  cachedAt: number;
}
 
export async function swrCached<T>(
  key: string,
  fetcher: () => Promise<T>,
  options: SWRCacheOptions
): Promise<T> {
  const { ttl, staleFor, prefix = "swr" } = options;
  const cacheKey = `${prefix}:${key}`;
 
  const entry = await redis.get<CachedEntry<T>>(cacheKey);
 
  if (entry) {
    const age = (Date.now() - entry.cachedAt) / 1000;
 
    if (age <= ttl) {
      // Fresh data — serve directly
      return entry.data;
    }
 
    if (age <= ttl + staleFor) {
      // Stale but acceptable: serve now, revalidate in the background.
      // Fire-and-forget; note that serverless platforms may freeze the
      // function once the response is sent, so use waitUntil where available.
      void revalidate(cacheKey, fetcher, ttl + staleFor);
      return entry.data;
    }
  }
 
  // No cache or too old — fetch and store
  const data = await fetcher();
  await redis.set(
    cacheKey,
    { data, cachedAt: Date.now() }, // the SDK handles serialization
    { ex: ttl + staleFor }
  );
 
  return data;
}
 
async function revalidate<T>(
  key: string,
  fetcher: () => Promise<T>,
  totalTtl: number
): Promise<void> {
  try {
    const data = await fetcher();
    await redis.set(
      key,
      { data, cachedAt: Date.now() }, // the SDK handles serialization
      { ex: totalTtl }
    );
  } catch {
    // Silent — stale data is already served
  }
}
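The three branches of swrCached reduce to a small pure function, which is handy for unit-testing the freshness logic in isolation (a hypothetical helper, not part of the file above):

```typescript
type Freshness = "fresh" | "stale" | "expired";

// Mirrors the branch logic in swrCached: fresh entries are served as-is,
// stale entries are served while revalidating, expired entries are refetched.
function freshness(ageSeconds: number, ttl: number, staleFor: number): Freshness {
  if (ageSeconds <= ttl) return "fresh";
  if (ageSeconds <= ttl + staleFor) return "stale";
  return "expired";
}
```

With ttl = 30 and staleFor = 60, an entry is served directly for 30 seconds, served-and-revalidated until 90 seconds, and refetched synchronously after that.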

Step 5: Message Queues with QStash

QStash is Upstash's serverless messaging service. It allows you to trigger background tasks without a dedicated server — QStash sends an HTTP request to your endpoint when it is time to execute the task.

Configure QStash

// src/lib/qstash.ts
import { Client } from "@upstash/qstash";
 
export const qstash = new Client({
  token: process.env.QSTASH_TOKEN!,
});

Publish a Message to the Queue

When a new contact is created, we send a message to QStash to trigger a welcome email:

// src/app/api/contacts/route.ts (updated POST)
import { qstash } from "@/lib/qstash";
 
export async function POST(request: NextRequest) {
  const body = await request.json();
  const { name, email } = body;
 
  if (!name || !email) {
    return NextResponse.json(
      { error: "Name and email are required" },
      { status: 400 }
    );
  }
 
  const newContact = {
    id: Date.now().toString(),
    name,
    email,
  };
 
  db.contacts.push(newContact);
  await invalidateCache("contacts:");
 
  // Publish a message to send a welcome email
  await qstash.publishJSON({
    url: `${process.env.NEXT_PUBLIC_APP_URL}/api/webhooks/welcome-email`,
    body: {
      contactId: newContact.id,
      name: newContact.name,
      email: newContact.email,
    },
    retries: 3,
    delay: 5, // Wait 5 seconds before the first attempt
  });
 
  return NextResponse.json({ contact: newContact }, { status: 201 });
}

Create the Consumer Webhook

// src/app/api/webhooks/welcome-email/route.ts
import { NextRequest, NextResponse } from "next/server";
import { verifySignatureAppRouter } from "@upstash/qstash/nextjs";
 
async function handler(request: NextRequest) {
  const body = await request.json();
  const { contactId, name, email } = body;
 
  console.log(`Sending welcome email to ${name} (${email})`);
 
  // Here, integrate your email service (Resend, SendGrid, etc.)
  await simulateSendEmail({
    to: email,
    subject: `Welcome ${name}!`,
    body: `Thanks for joining us. Your account is ready.`,
  });
 
  return NextResponse.json({ success: true });
}
 
async function simulateSendEmail(params: {
  to: string;
  subject: string;
  body: string;
}) {
  // Simulate sending delay
  await new Promise((resolve) => setTimeout(resolve, 1000));
  console.log(`Email sent to ${params.to}: ${params.subject}`);
}
 
// Verify QStash signature to secure the webhook
export const POST = verifySignatureAppRouter(handler);

The verifySignatureAppRouter function is crucial — it verifies that the request actually comes from QStash and not from an attacker. QStash signs every message using the keys defined in your environment variables.

Schedule Recurring Tasks

QStash also supports scheduled tasks (cron). For example, sending a daily report:

// src/lib/scheduled-tasks.ts
import { qstash } from "./qstash";
 
export async function setupDailyReport() {
  await qstash.schedules.create({
    destination: `${process.env.NEXT_PUBLIC_APP_URL}/api/webhooks/daily-report`,
    cron: "0 9 * * *", // Every day at 9 AM
    retries: 3,
  });
}
// src/app/api/webhooks/daily-report/route.ts
import { NextRequest, NextResponse } from "next/server";
import { verifySignatureAppRouter } from "@upstash/qstash/nextjs";
import { redis } from "@/lib/redis";
 
async function handler(request: NextRequest) {
  // Collect daily metrics
  const totalRequests = await redis.get<number>("metrics:total-requests") || 0;
  const newContacts = await redis.get<number>("metrics:new-contacts") || 0;
  const rateLimitHits = await redis.get<number>("metrics:rate-limit-hits") || 0;
 
  const report = {
    date: new Date().toISOString().split("T")[0],
    totalRequests,
    newContacts,
    rateLimitHits,
  };
 
  console.log("Daily report:", report);
 
  // Reset counters
  await redis.set("metrics:total-requests", 0);
  await redis.set("metrics:new-contacts", 0);
  await redis.set("metrics:rate-limit-hits", 0);
 
  return NextResponse.json({ report });
}
 
export const POST = verifySignatureAppRouter(handler);

Step 6: Build the Monitoring Dashboard

Create a React component to visualize Redis metrics in real time.

Metrics API Route

// src/app/api/metrics/route.ts
import { NextResponse } from "next/server";
import { redis } from "@/lib/redis";
 
export async function GET() {
  const [totalRequests, newContacts, rateLimitHits, cacheKeys] =
    await Promise.all([
      redis.get<number>("metrics:total-requests"),
      redis.get<number>("metrics:new-contacts"),
      redis.get<number>("metrics:rate-limit-hits"),
      redis.keys("cache:*"),
    ]);
 
  return NextResponse.json({
    totalRequests: totalRequests || 0,
    newContacts: newContacts || 0,
    rateLimitHits: rateLimitHits || 0,
    cachedEntries: cacheKeys.length,
    uptime: process.uptime(),
  });
}
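The route above reads counters that nothing shown so far increments. A minimal sketch of the write side, using Redis INCR (atomic, and it creates missing keys starting from 0); the client is injected behind a tiny interface of my own so the helper is unit-testable, but in the app you would pass the shared client from src/lib/redis.ts:

```typescript
// Hypothetical helper: bump a dashboard counter. INCR is atomic, so
// concurrent requests never lose counts. The client is injected to keep
// the helper testable without a live Redis.
interface CounterClient {
  incr(key: string): Promise<number>;
}

export async function trackMetric(
  client: CounterClient,
  name: "total-requests" | "new-contacts" | "rate-limit-hits"
): Promise<number> {
  // Key names match what the metrics route reads
  return client.incr(`metrics:${name}`);
}
```

You would call `trackMetric(redis, "total-requests")` from the middleware, `trackMetric(redis, "new-contacts")` from the POST handler, and `trackMetric(redis, "rate-limit-hits")` on the 429 branch.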

Dashboard Component

// src/app/dashboard/page.tsx
"use client";
 
import { useEffect, useState } from "react";
 
interface Metrics {
  totalRequests: number;
  newContacts: number;
  rateLimitHits: number;
  cachedEntries: number;
  uptime: number;
}
 
export default function DashboardPage() {
  const [metrics, setMetrics] = useState<Metrics | null>(null);
  const [loading, setLoading] = useState(true);
 
  useEffect(() => {
    const fetchMetrics = async () => {
      try {
        const res = await fetch("/api/metrics");
        const data = await res.json();
        setMetrics(data);
      } catch (err) {
        console.error("Error loading metrics:", err);
      } finally {
        setLoading(false);
      }
    };
 
    fetchMetrics();
    const interval = setInterval(fetchMetrics, 5000); // Refresh every 5 seconds
    return () => clearInterval(interval);
  }, []);
 
  if (loading) {
    return (
      <div className="flex items-center justify-center min-h-screen">
        <p className="text-gray-500">Loading metrics...</p>
      </div>
    );
  }
 
  return (
    <div className="max-w-4xl mx-auto p-8">
      <h1 className="text-3xl font-bold mb-8">Redis Dashboard</h1>
 
      <div className="grid grid-cols-2 gap-6 md:grid-cols-4">
        <MetricCard
          label="Total Requests"
          value={metrics?.totalRequests ?? 0}
          color="blue"
        />
        <MetricCard
          label="New Contacts"
          value={metrics?.newContacts ?? 0}
          color="green"
        />
        <MetricCard
          label="Rate Limit Hits"
          value={metrics?.rateLimitHits ?? 0}
          color="red"
        />
        <MetricCard
          label="Cached Entries"
          value={metrics?.cachedEntries ?? 0}
          color="purple"
        />
      </div>
    </div>
  );
}
 
function MetricCard({
  label,
  value,
  color,
}: {
  label: string;
  value: number;
  color: string;
}) {
  const colorClasses: Record<string, string> = {
    blue: "bg-blue-50 border-blue-200 text-blue-700",
    green: "bg-green-50 border-green-200 text-green-700",
    red: "bg-red-50 border-red-200 text-red-700",
    purple: "bg-purple-50 border-purple-200 text-purple-700",
  };
 
  return (
    <div className={`rounded-xl border-2 p-6 ${colorClasses[color]}`}>
      <p className="text-sm font-medium opacity-75">{label}</p>
      <p className="text-3xl font-bold mt-2">{value.toLocaleString()}</p>
    </div>
  );
}

Step 7: Test the Complete System

Test Rate Limiting

Use curl to send rapid requests and verify the rate limiter works:

# Send 12 requests quickly, printing the status and the remaining budget
for i in $(seq 1 12); do
  echo "Request $i:"
  curl -s -o /dev/null -D - http://localhost:3000/api/contacts \
    | grep -iE "^(HTTP|x-ratelimit-remaining)"
done

You should see the first 10 requests return 200 and the last two return 429.

Test Caching

# First request — no cache (slow)
time curl http://localhost:3000/api/contacts
 
# Second request — from cache (fast)
time curl http://localhost:3000/api/contacts

The first request should take about 500ms (database simulation), and the second should be nearly instantaneous.

Test Message Queues

# Create a contact — triggers an email via QStash
curl -X POST http://localhost:3000/api/contacts \
  -H "Content-Type: application/json" \
  -d '{"name": "Fatma Cherif", "email": "fatma@example.com"}'

Check your application logs to see the "Email sent" message appear a few seconds after creation.

Troubleshooting

Error "UPSTASH_REDIS_REST_URL is not defined"

Verify that the .env.local file exists at the project root and contains the correct variables. Restart the development server after any changes to .env.local.

Rate limiter does not block requests in development

In development mode, the x-forwarded-for header may be absent. The middleware falls back to 127.0.0.1, so all local requests share the same counter. This is expected behavior.

QStash webhooks fail locally

QStash needs to reach your server over the Internet. For local development, use a tunnel like ngrok:

npx ngrok http 3000

Then update NEXT_PUBLIC_APP_URL in .env.local with the ngrok URL.

Cache does not clear after a write

Verify that the invalidateCache function uses the correct prefix. Redis keys are case-sensitive.

Production Best Practices

1. Use Consistent Key Prefixes

Organize your Redis keys with descriptive prefixes for easier monitoring and debugging:

ratelimit:api:{ip}
cache:contacts:list:{query}
swr:dashboard:metrics
metrics:total-requests

2. Set Appropriate TTLs

  • Rate limiting: short TTL (1 to 5 minutes)
  • Data cache: medium TTL (30 seconds to 5 minutes)
  • Sessions: long TTL (24 hours to 7 days)
  • Temporary data: very short TTL (5 to 30 seconds)

3. Handle Redis Errors Gracefully

Redis should never be a critical point of failure. If Redis is unavailable, your application should continue to work:

export async function cachedWithFallback<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttl: number = 60
): Promise<T> {
  try {
    const cached = await redis.get<T>(`cache:${key}`);
    if (cached !== null) return cached; // falsy values such as 0 are valid hits
  } catch {
    // Redis unavailable — continue without cache
    console.warn("Redis unavailable, executing without cache");
  }
 
  const data = await fetcher();
 
  try {
    await redis.set(`cache:${key}`, data, { ex: ttl }); // SDK serializes values
  } catch {
    // Silent — data is still returned
  }
 
  return data;
}

4. Monitor Usage

The free Upstash plan offers 10,000 requests per day. For production applications:

  • Use the Upstash dashboard to monitor consumption
  • Configure overage alerts
  • Optimize the number of commands per operation (use Redis pipelines)

Deploying to Vercel

Deployment is straightforward thanks to Upstash's serverless nature:

  1. Push your code to GitHub
  2. Connect the repository to Vercel
  3. Add the environment variables in Vercel settings
  4. Deploy

Upstash also offers a native Vercel integration that automatically configures environment variables:

# Install the Upstash integration from the Vercel marketplace
# UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN
# will be automatically added to your project

Next Steps

Now that you have mastered the basics of Upstash Redis with Next.js, here are some paths to explore further:

  • Session management — replace cookie-based sessions with Redis sessions for better scalability
  • Leaderboards and rankings — use Redis Sorted Sets for real-time leaderboards
  • Real-time Pub/Sub — combine Upstash Redis with Server-Sent Events for real-time push
  • Full page caching — cache HTML responses for dynamic static pages
  • Integration with Drizzle ORM — combine Redis caching with your data access layer

Conclusion

You have learned how to integrate Upstash Redis into a Next.js application to solve three common problems: rate limiting, caching, and message queues. Upstash's serverless approach eliminates operational complexity — no Redis server to manage, no persistent connection to maintain, and per-request billing that makes the service accessible even for small projects.

The patterns presented in this tutorial — cache-aside, stale-while-revalidate, and verified webhooks — are solid foundations for any production application. Combine them according to your needs: rate limiting to protect, caching to accelerate, and message queues to distribute work.

