Build a Real-Time Video and Audio App with LiveKit and Next.js

Real-time video and audio communication has become a core requirement in modern applications — from video conferencing to AI voice agents to live streaming. But building these systems from scratch using raw WebRTC is extremely complex: you need to manage TURN/STUN servers, negotiate sessions, and handle network edge cases.
LiveKit solves this by providing open-source infrastructure for real-time communication. It handles all WebRTC complexity and offers simple APIs for creating rooms, managing participants, and streaming media. With ready-made React components, you can build a full video calling app with minimal effort.
In this guide, you will build a video conferencing application that supports:
- Multi-participant rooms with video and audio
- Screen sharing
- Microphone and camera controls
- Grid layout for participants
- Secure access token generation from the server
- Responsive and modern UI
Prerequisites
Before starting, ensure you have:
- Node.js 20+ installed
- Basic knowledge of React and TypeScript
- Familiarity with Next.js App Router
- A free LiveKit Cloud account (or a local LiveKit server via Docker)
- A code editor (VS Code recommended)
What You Will Build
A complete video conferencing application featuring:
- Join page — user enters their name and room name, then joins
- Video room — grid view of all participants with their live streams
- Toolbar — buttons to control microphone, camera, screen sharing, and leaving
- Secure API route — JWT access token generation on the server side
- Connection status — visual indicators for each participant's state
Step 1: Create a Next.js Project
Create a new Next.js 15 project:
npx create-next-app@latest livekit-video --typescript --tailwind --eslint --app --src-dir --use-npm
cd livekit-video
Install LiveKit packages:
npm install livekit-client livekit-server-sdk @livekit/components-react @livekit/components-styles
- livekit-client — client library for communicating with the LiveKit server
- livekit-server-sdk — server-side access token generation
- @livekit/components-react — ready-made React components for video and audio
- @livekit/components-styles — default CSS styles for the components
Step 2: Set Up Environment Variables
Create a .env.local file at the project root:
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your_api_key
LIVEKIT_API_SECRET=your_api_secret
Getting Your Keys
- Sign up at LiveKit Cloud
- Create a new project
- Copy the URL, API Key, and API Secret from the dashboard
Alternatively, you can run a LiveKit server locally using Docker:
docker run --rm -p 7880:7880 -p 7881:7881 -p 7882:7882/udp \
-e LIVEKIT_KEYS="devkey: secret" \
livekit/livekit-server
In that case, use:
LIVEKIT_URL=ws://localhost:7880
LIVEKIT_API_KEY=devkey
LIVEKIT_API_SECRET=secret
Step 3: Create the Token API Route
Create src/app/api/token/route.ts:
import { AccessToken } from "livekit-server-sdk";
import { NextRequest, NextResponse } from "next/server";
export async function POST(request: NextRequest) {
const { roomName, participantName } = await request.json();
if (!roomName || !participantName) {
return NextResponse.json(
{ error: "roomName and participantName are required" },
{ status: 400 }
);
}
const apiKey = process.env.LIVEKIT_API_KEY;
const apiSecret = process.env.LIVEKIT_API_SECRET;
if (!apiKey || !apiSecret) {
return NextResponse.json(
{ error: "Server misconfigured" },
{ status: 500 }
);
}
const token = new AccessToken(apiKey, apiSecret, {
identity: participantName,
name: participantName,
});
token.addGrant({
room: roomName,
roomJoin: true,
canPublish: true,
canSubscribe: true,
canPublishData: true,
});
const jwt = await token.toJwt();
return NextResponse.json({ token: jwt });
}
This route:
- Receives the room name and participant name from a POST request
- Validates that all required fields are present
- Creates a JWT access token using the LiveKit Server SDK
- Grants permissions — join room, publish, and subscribe
- Returns the token to the client
The token grants full permissions: publishing, subscribing, and sending data. In production, customize permissions based on user roles.
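For example, a sketch of what role-based grants might look like. The Role type and grantForRole helper are hypothetical, not part of the LiveKit SDK; only the grant fields mirror the options passed to addGrant above:

```typescript
// Hypothetical role model -- adjust to your own auth system.
type Role = "host" | "speaker" | "viewer";

interface GrantOptions {
  room: string;
  roomJoin: boolean;
  canPublish: boolean;
  canSubscribe: boolean;
  canPublishData: boolean;
}

// Map a role to a LiveKit grant: viewers can watch and chat,
// but cannot broadcast their camera or microphone.
function grantForRole(roomName: string, role: Role): GrantOptions {
  return {
    room: roomName,
    roomJoin: true,
    canPublish: role !== "viewer",
    canSubscribe: true,
    canPublishData: true,
  };
}

console.log(grantForRole("demo", "viewer").canPublish); // false
```

You would then pass the result to token.addGrant(...) in the route above instead of the hard-coded full-permission grant.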
Step 4: Build the Join Page
Create src/app/page.tsx:
"use client";
import { useState, FormEvent } from "react";
import { useRouter } from "next/navigation";
export default function JoinPage() {
const [participantName, setParticipantName] = useState("");
const [roomName, setRoomName] = useState("");
const [isLoading, setIsLoading] = useState(false);
const router = useRouter();
async function handleJoin(e: FormEvent) {
e.preventDefault();
if (!participantName.trim() || !roomName.trim()) return;
setIsLoading(true);
const params = new URLSearchParams({
room: roomName.trim(),
name: participantName.trim(),
});
router.push(`/room?${params.toString()}`);
}
return (
<main className="min-h-screen flex items-center justify-center bg-gray-950">
<div className="w-full max-w-md p-8 bg-gray-900 rounded-2xl shadow-2xl">
<h1 className="text-3xl font-bold text-white text-center mb-2">
LiveKit Video
</h1>
<p className="text-gray-400 text-center mb-8">
Join a video call room
</p>
<form onSubmit={handleJoin} className="space-y-5">
<div>
<label
htmlFor="name"
className="block text-sm font-medium text-gray-300 mb-1"
>
Your Name
</label>
<input
id="name"
type="text"
value={participantName}
onChange={(e) => setParticipantName(e.target.value)}
placeholder="Enter your name"
className="w-full px-4 py-3 bg-gray-800 border border-gray-700 rounded-lg text-white placeholder-gray-500 focus:outline-none focus:ring-2 focus:ring-blue-500"
required
/>
</div>
<div>
<label
htmlFor="room"
className="block text-sm font-medium text-gray-300 mb-1"
>
Room Name
</label>
<input
id="room"
type="text"
value={roomName}
onChange={(e) => setRoomName(e.target.value)}
placeholder="Enter room name"
className="w-full px-4 py-3 bg-gray-800 border border-gray-700 rounded-lg text-white placeholder-gray-500 focus:outline-none focus:ring-2 focus:ring-blue-500"
required
/>
</div>
<button
type="submit"
disabled={isLoading}
className="w-full py-3 bg-blue-600 hover:bg-blue-700 disabled:bg-blue-800 text-white font-semibold rounded-lg transition-colors"
>
{isLoading ? "Joining..." : "Join Room"}
</button>
</form>
</div>
</main>
);
}
A simple page with a form to enter the participant name and room name. On submit, the user is directed to the room page with the information in URL parameters.
Step 5: Build the Video Room Component
Create src/components/VideoRoom.tsx:
"use client";
import { useEffect, useState } from "react";
import {
LiveKitRoom,
VideoConference,
RoomAudioRenderer,
} from "@livekit/components-react";
import "@livekit/components-styles";
interface VideoRoomProps {
roomName: string;
participantName: string;
onLeave: () => void;
}
export function VideoRoom({
roomName,
participantName,
onLeave,
}: VideoRoomProps) {
const [token, setToken] = useState<string | null>(null);
const [error, setError] = useState<string | null>(null);
useEffect(() => {
async function fetchToken() {
try {
const response = await fetch("/api/token", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ roomName, participantName }),
});
if (!response.ok) {
throw new Error("Failed to get access token");
}
const data = await response.json();
setToken(data.token);
} catch (err) {
setError(err instanceof Error ? err.message : "Connection error");
}
}
fetchToken();
}, [roomName, participantName]);
if (error) {
return (
<div className="min-h-screen flex items-center justify-center bg-gray-950">
<div className="text-center">
<p className="text-red-400 text-lg mb-4">{error}</p>
<button
onClick={onLeave}
className="px-6 py-2 bg-gray-700 text-white rounded-lg hover:bg-gray-600"
>
Go Back
</button>
</div>
</div>
);
}
if (!token) {
return (
<div className="min-h-screen flex items-center justify-center bg-gray-950">
<div className="text-white text-lg">Connecting to room...</div>
</div>
);
}
return (
<LiveKitRoom
token={token}
serverUrl={process.env.NEXT_PUBLIC_LIVEKIT_URL}
connect={true}
onDisconnected={onLeave}
data-lk-theme="default"
style={{ height: "100vh" }}
>
<VideoConference />
<RoomAudioRenderer />
</LiveKitRoom>
);
}
This component:
- Fetches the access token from the API route on mount
- Shows loading state while fetching the token
- Shows error with a back button if the connection fails
- Connects to the room via LiveKitRoom once the token is obtained
- Renders the conference UI using the pre-built VideoConference
The VideoConference component from LiveKit provides a complete UI including participant display, toolbar, and automatic screen sharing.
Step 6: Build the Room Page
Create src/app/room/page.tsx:
"use client";
import { useSearchParams, useRouter } from "next/navigation";
import { Suspense, useEffect } from "react";
import { VideoRoom } from "@/components/VideoRoom";
function RoomContent() {
const searchParams = useSearchParams();
const router = useRouter();
const roomName = searchParams.get("room");
const participantName = searchParams.get("name");
// Redirect in an effect; calling router.push during render triggers a React warning
useEffect(() => {
if (!roomName || !participantName) router.push("/");
}, [roomName, participantName, router]);
if (!roomName || !participantName) return null;
return (
<VideoRoom
roomName={roomName}
participantName={participantName}
onLeave={() => router.push("/")}
/>
);
}
export default function RoomPage() {
return (
<Suspense
fallback={
<div className="min-h-screen flex items-center justify-center bg-gray-950">
<div className="text-white text-lg">Loading...</div>
</div>
}
>
<RoomContent />
</Suspense>
);
}
The room page extracts the information from the URL and passes it to the VideoRoom component. If information is missing, the user is redirected to the join page.
Step 7: Add the Public Environment Variable
Add NEXT_PUBLIC_LIVEKIT_URL to your .env.local:
NEXT_PUBLIC_LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your_api_key
LIVEKIT_API_SECRET=your_api_secret
The NEXT_PUBLIC_ prefix makes the variable available in client-side code — required for connecting LiveKitRoom to the server.
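A common pitfall here is pointing the variable at an https:// URL; LiveKit clients connect over WebSocket, so the scheme must be ws:// (local) or wss:// (cloud). A small sanity check you could run at startup (the helper name is my own, not a LiveKit API):

```typescript
// Returns true only for WebSocket-scheme URLs, which is what
// LiveKitRoom's serverUrl prop expects.
function isValidLiveKitUrl(url: string | undefined): boolean {
  if (!url) return false;
  return url.startsWith("ws://") || url.startsWith("wss://");
}

console.log(isValidLiveKitUrl("wss://your-project.livekit.cloud")); // true
console.log(isValidLiveKitUrl("https://your-project.livekit.cloud")); // false
```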
Step 8: Build a Custom Video Component
The pre-built VideoConference component is great for a quick start. But for full UI customization, build your own. Create src/components/CustomVideoRoom.tsx:
"use client";
import { useEffect, useState } from "react";
import {
LiveKitRoom,
RoomAudioRenderer,
GridLayout,
ParticipantTile,
useTracks,
useParticipants,
TrackToggle,
DisconnectButton,
} from "@livekit/components-react";
import "@livekit/components-styles";
import { Track } from "livekit-client";
interface CustomVideoRoomProps {
roomName: string;
participantName: string;
onLeave: () => void;
}
function StageArea() {
const tracks = useTracks(
[
{ source: Track.Source.Camera, withPlaceholder: true },
{ source: Track.Source.ScreenShare, withPlaceholder: false },
],
{ onlySubscribed: false }
);
return (
<GridLayout
tracks={tracks}
style={{ height: "calc(100vh - 80px)" }}
>
<ParticipantTile />
</GridLayout>
);
}
function CustomControlBar() {
const participants = useParticipants();
return (
<div className="h-20 bg-gray-900 border-t border-gray-800 flex items-center justify-between px-6">
<div className="text-gray-400 text-sm">
{participants.length} participant{participants.length !== 1 ? "s" : ""}
</div>
<div className="flex items-center gap-3">
<TrackToggle
source={Track.Source.Microphone}
className="px-4 py-2 bg-gray-700 hover:bg-gray-600 text-white rounded-full transition-colors"
/>
<TrackToggle
source={Track.Source.Camera}
className="px-4 py-2 bg-gray-700 hover:bg-gray-600 text-white rounded-full transition-colors"
/>
<TrackToggle
source={Track.Source.ScreenShare}
className="px-4 py-2 bg-gray-700 hover:bg-gray-600 text-white rounded-full transition-colors"
/>
<DisconnectButton className="px-4 py-2 bg-red-600 hover:bg-red-700 text-white rounded-full transition-colors">
Leave
</DisconnectButton>
</div>
<div className="w-24" />
</div>
);
}
export function CustomVideoRoom({
roomName,
participantName,
onLeave,
}: CustomVideoRoomProps) {
const [token, setToken] = useState<string | null>(null);
const [error, setError] = useState<string | null>(null);
useEffect(() => {
async function fetchToken() {
try {
const response = await fetch("/api/token", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ roomName, participantName }),
});
if (!response.ok) throw new Error("Failed to get token");
const data = await response.json();
setToken(data.token);
} catch (err) {
setError(err instanceof Error ? err.message : "Error");
}
}
fetchToken();
}, [roomName, participantName]);
if (error) {
return (
<div className="min-h-screen flex items-center justify-center bg-gray-950">
<div className="text-center">
<p className="text-red-400 text-lg mb-4">{error}</p>
<button
onClick={onLeave}
className="px-6 py-2 bg-gray-700 text-white rounded-lg"
>
Go Back
</button>
</div>
</div>
);
}
if (!token) {
return (
<div className="min-h-screen flex items-center justify-center bg-gray-950">
<div className="text-white">Connecting...</div>
</div>
);
}
return (
<LiveKitRoom
token={token}
serverUrl={process.env.NEXT_PUBLIC_LIVEKIT_URL}
connect={true}
onDisconnected={onLeave}
data-lk-theme="default"
style={{ height: "100vh" }}
>
<div className="flex flex-col h-screen bg-gray-950">
<StageArea />
<CustomControlBar />
</div>
<RoomAudioRenderer />
</LiveKitRoom>
);
}
The key differences from the pre-built component:
- StageArea — uses useTracks to fetch camera and screen share tracks, rendering them in a grid via GridLayout
- CustomControlBar — fully custom toolbar with participant count and styled control buttons
- TrackToggle — a pre-built component that toggles track state (on/off) with automatic icon updates
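GridLayout renders tiles in the order of the tracks array, so one way to keep screen shares prominent is to sort them to the front before passing the array in. A dependency-free sketch of the ordering logic — the simplified TrackRef shape below is mine, not the real type from @livekit/components-react:

```typescript
// Simplified stand-in for a LiveKit track reference, just enough
// to demonstrate the ordering.
interface TrackRef {
  participantIdentity: string;
  source: "camera" | "screen_share";
}

// Screen shares first, then cameras; within each group, sort by
// participant identity for a stable layout.
function orderTracks(tracks: TrackRef[]): TrackRef[] {
  return [...tracks].sort((a, b) => {
    if (a.source !== b.source) {
      return a.source === "screen_share" ? -1 : 1;
    }
    return a.participantIdentity.localeCompare(b.participantIdentity);
  });
}
```

In StageArea you would apply the same comparison to the array returned by useTracks before handing it to GridLayout.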
Step 9: Add Room Events and Connection State Handling
To add room event handling and participant state tracking, create src/hooks/useRoomEvents.ts:
import { useEffect } from "react";
import { useRoomContext } from "@livekit/components-react";
import { RoomEvent, ConnectionState, RemoteParticipant } from "livekit-client";
export function useRoomEvents() {
const room = useRoomContext();
useEffect(() => {
function handleParticipantConnected(participant: RemoteParticipant) {
console.log(`${participant.identity} joined the room`);
}
function handleParticipantDisconnected(participant: RemoteParticipant) {
console.log(`${participant.identity} left the room`);
}
function handleConnectionStateChanged(state: ConnectionState) {
console.log(`Connection state: ${state}`);
}
room.on(RoomEvent.ParticipantConnected, handleParticipantConnected);
room.on(RoomEvent.ParticipantDisconnected, handleParticipantDisconnected);
room.on(
RoomEvent.ConnectionStateChanged,
handleConnectionStateChanged
);
return () => {
room.off(RoomEvent.ParticipantConnected, handleParticipantConnected);
room.off(
RoomEvent.ParticipantDisconnected,
handleParticipantDisconnected
);
room.off(
RoomEvent.ConnectionStateChanged,
handleConnectionStateChanged
);
};
}, [room]);
}
Use this hook inside any component wrapped by LiveKitRoom:
function StageArea() {
useRoomEvents(); // track events
const tracks = useTracks([
{ source: Track.Source.Camera, withPlaceholder: true },
{ source: Track.Source.ScreenShare, withPlaceholder: false },
]);
return (
<GridLayout tracks={tracks}>
<ParticipantTile />
</GridLayout>
);
}
Key LiveKit room events:
| Event | Description |
|---|---|
| ParticipantConnected | A new participant joined |
| ParticipantDisconnected | A participant left |
| TrackSubscribed | Started receiving a media track |
| TrackUnsubscribed | Stopped receiving a track |
| ConnectionStateChanged | Connection state changed |
| DataReceived | Received a data message |
| ActiveSpeakersChanged | Active speakers changed |
Step 10: Send and Receive Text Messages
LiveKit supports sending data between participants via the Data Channel. Create src/components/Chat.tsx:
"use client";
import { useState, useEffect, useRef, FormEvent } from "react";
import { useRoomContext } from "@livekit/components-react";
import { RoomEvent, RemoteParticipant } from "livekit-client";
interface ChatMessage {
sender: string;
text: string;
timestamp: number;
}
export function Chat() {
const room = useRoomContext();
const [messages, setMessages] = useState<ChatMessage[]>([]);
const [input, setInput] = useState("");
const scrollRef = useRef<HTMLDivElement>(null);
useEffect(() => {
function handleDataReceived(
payload: Uint8Array,
participant?: RemoteParticipant
) {
const text = new TextDecoder().decode(payload);
const message: ChatMessage = {
sender: participant?.identity || "Unknown",
text,
timestamp: Date.now(),
};
setMessages((prev) => [...prev, message]);
}
room.on(RoomEvent.DataReceived, handleDataReceived);
return () => {
room.off(RoomEvent.DataReceived, handleDataReceived);
};
}, [room]);
useEffect(() => {
scrollRef.current?.scrollIntoView({ behavior: "smooth" });
}, [messages]);
async function sendMessage(e: FormEvent) {
e.preventDefault();
if (!input.trim()) return;
const data = new TextEncoder().encode(input.trim());
await room.localParticipant.publishData(data, {
reliable: true,
});
setMessages((prev) => [
...prev,
{
sender: room.localParticipant.identity,
text: input.trim(),
timestamp: Date.now(),
},
]);
setInput("");
}
return (
<div className="w-80 bg-gray-900 border-l border-gray-800 flex flex-col">
<div className="p-4 border-b border-gray-800">
<h3 className="text-white font-semibold">Chat</h3>
</div>
<div className="flex-1 overflow-y-auto p-4 space-y-3">
{messages.map((msg, i) => (
<div key={i} className="text-sm">
<span className="font-medium text-blue-400">
{msg.sender}:
</span>{" "}
<span className="text-gray-300">{msg.text}</span>
</div>
))}
<div ref={scrollRef} />
</div>
<form
onSubmit={sendMessage}
className="p-4 border-t border-gray-800"
>
<div className="flex gap-2">
<input
type="text"
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Type a message..."
className="flex-1 px-3 py-2 bg-gray-800 border border-gray-700 rounded-lg text-white text-sm placeholder-gray-500 focus:outline-none focus:ring-1 focus:ring-blue-500"
/>
<button
type="submit"
className="px-4 py-2 bg-blue-600 text-white text-sm rounded-lg hover:bg-blue-700"
>
Send
</button>
</div>
</form>
</div>
);
}
This component adds text chat inside the room:
- publishData — sends data to all participants via the WebRTC data channel
- DataReceived — event fired when data is received from another participant
- reliable: true — uses a reliable data channel (like TCP) to guarantee message delivery
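The chat above sends raw text, so every incoming data packet is treated as a chat message. If you later send other kinds of data over the same channel, a small JSON envelope keeps message types apart. A sketch — the envelope shape is my own convention, not a LiveKit format:

```typescript
// A minimal typed envelope for data-channel payloads, using the same
// TextEncoder/TextDecoder primitives as the Chat component.
interface ChatPayload {
  type: "chat";
  text: string;
  sentAt: number;
}

function encodeChat(text: string): Uint8Array {
  const payload: ChatPayload = { type: "chat", text, sentAt: Date.now() };
  return new TextEncoder().encode(JSON.stringify(payload));
}

function decodeChat(data: Uint8Array): ChatPayload | null {
  try {
    const parsed = JSON.parse(new TextDecoder().decode(data));
    return parsed?.type === "chat" ? (parsed as ChatPayload) : null;
  } catch {
    return null; // not JSON -- ignore unknown payloads
  }
}
```

You would call encodeChat before publishData and decodeChat inside the DataReceived handler, skipping messages that decode to null.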
To integrate the chat with the custom video room:
<LiveKitRoom token={token} serverUrl={url} connect={true}>
<div className="flex h-screen bg-gray-950">
<div className="flex-1 flex flex-col">
<StageArea />
<CustomControlBar />
</div>
<Chat />
</div>
<RoomAudioRenderer />
</LiveKitRoom>
Step 11: Add Pre-Join Settings
A better user experience includes previewing the camera and microphone before joining. Create src/components/PreJoin.tsx:
"use client";
import { useState, useEffect, useRef } from "react";
interface PreJoinProps {
onJoin: (settings: {
videoEnabled: boolean;
audioEnabled: boolean;
}) => void;
participantName: string;
}
export function PreJoin({ onJoin, participantName }: PreJoinProps) {
const [videoEnabled, setVideoEnabled] = useState(true);
const [audioEnabled, setAudioEnabled] = useState(true);
const [stream, setStream] = useState<MediaStream | null>(null);
const videoRef = useRef<HTMLVideoElement>(null);
useEffect(() => {
let mediaStream: MediaStream | null = null;
async function getMedia() {
// getUserMedia rejects when both constraints are false
if (!videoEnabled && !audioEnabled) {
setStream(null);
return;
}
try {
mediaStream = await navigator.mediaDevices.getUserMedia({
video: videoEnabled,
audio: audioEnabled,
});
setStream(mediaStream);
if (videoRef.current) {
videoRef.current.srcObject = mediaStream;
}
} catch (err) {
console.error("Failed to access media devices:", err);
}
}
getMedia();
return () => {
// Stop the tracks created by this effect run. Using the local
// mediaStream variable avoids the stale closure you would get
// by reading the `stream` state here.
mediaStream?.getTracks().forEach((track) => track.stop());
};
}, [videoEnabled, audioEnabled]);
return (
<div className="min-h-screen flex items-center justify-center bg-gray-950">
<div className="w-full max-w-lg p-8 bg-gray-900 rounded-2xl">
<h2 className="text-xl font-bold text-white text-center mb-6">
Get Ready to Join
</h2>
<div className="aspect-video bg-gray-800 rounded-xl overflow-hidden mb-6 relative">
{videoEnabled ? (
<video
ref={videoRef}
autoPlay
muted
playsInline
className="w-full h-full object-cover"
/>
) : (
<div className="w-full h-full flex items-center justify-center">
<div className="w-20 h-20 bg-gray-700 rounded-full flex items-center justify-center">
<span className="text-2xl text-white">
{participantName[0]?.toUpperCase()}
</span>
</div>
</div>
)}
</div>
<div className="flex justify-center gap-4 mb-6">
<button
onClick={() => setAudioEnabled(!audioEnabled)}
className={`px-4 py-2 rounded-full transition-colors ${
audioEnabled
? "bg-gray-700 text-white"
: "bg-red-600 text-white"
}`}
>
{audioEnabled ? "Mic On" : "Mic Off"}
</button>
<button
onClick={() => setVideoEnabled(!videoEnabled)}
className={`px-4 py-2 rounded-full transition-colors ${
videoEnabled
? "bg-gray-700 text-white"
: "bg-red-600 text-white"
}`}
>
{videoEnabled ? "Camera On" : "Camera Off"}
</button>
</div>
<button
onClick={() => {
stream?.getTracks().forEach((track) => track.stop());
onJoin({ videoEnabled, audioEnabled });
}}
className="w-full py-3 bg-blue-600 hover:bg-blue-700 text-white font-semibold rounded-lg transition-colors"
>
Join Now
</button>
</div>
</div>
);
}
This component shows a camera preview and lets the user toggle the microphone and camera before joining the room.
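To carry these choices to the room page, one option is to encode them in the same query string the join page already uses; the LiveKitRoom component can then receive them via its video and audio props. A sketch of the round-trip (the helper names are my own):

```typescript
interface JoinSettings {
  videoEnabled: boolean;
  audioEnabled: boolean;
}

// Serialize the pre-join choices alongside the existing room/name params
function toRoomQuery(room: string, name: string, s: JoinSettings): string {
  return new URLSearchParams({
    room,
    name,
    video: String(s.videoEnabled),
    audio: String(s.audioEnabled),
  }).toString();
}

// Parse them back on the room page; default to enabled when absent
function fromRoomQuery(query: string): JoinSettings {
  const params = new URLSearchParams(query);
  return {
    videoEnabled: params.get("video") !== "false",
    audioEnabled: params.get("audio") !== "false",
  };
}
```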
Step 12: Run and Test the Application
Start the development server:
npm run dev
Open your browser at http://localhost:3000 and follow these steps:
- Enter your name and a room name (e.g., "test-room")
- Click "Join Room"
- Allow camera and microphone access when prompted
- To test the call, open a second browser window (or a different browser) and join the same room with a different name
You should see the video stream of both participants and hear audio.
Common Troubleshooting
| Issue | Solution |
|---|---|
| Connection error | Verify LIVEKIT_URL and NEXT_PUBLIC_LIVEKIT_URL are correct |
| No video showing | Make sure camera permission is granted in the browser |
| No audio | Make sure microphone permission is granted |
| CORS error | If using a local server, ensure it is running on the correct ports |
Step 13: Deploy to Production
Deploy on Vercel
- Push your code to a Git repository:
git init
git add .
git commit -m "feat: livekit video app"
git remote add origin https://github.com/your-username/livekit-video.git
git push -u origin main
- Connect the repository on Vercel
- Add environment variables in the project settings:
NEXT_PUBLIC_LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your_api_key
LIVEKIT_API_SECRET=your_api_secret
- Deploy the application
Deploy with Docker
Create a Dockerfile:
FROM node:20-alpine AS base
FROM base AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
EXPOSE 3000
CMD ["node", "server.js"]
This Dockerfile relies on Next.js standalone output, so add output: "standalone" to your next.config.ts. Then build and run:
docker build -t livekit-video .
docker run -p 3000:3000 --env-file .env.local livekit-video
Advanced Features
Call Recording
LiveKit supports call recording via the Egress API:
import { EgressClient, EncodedFileOutput } from "livekit-server-sdk";
const egressClient = new EgressClient(
process.env.LIVEKIT_URL!,
process.env.LIVEKIT_API_KEY!,
process.env.LIVEKIT_API_SECRET!
);
// Start recording a room
const output = new EncodedFileOutput({
filepath: "recordings/room-{room_name}-{time}.mp4",
// Configure S3 or GCS for storage
});
await egressClient.startRoomCompositeEgress(roomName, { file: output });
AI Voice Agent
You can build an AI voice agent that joins the room and responds with audio using LiveKit Agents:
# agents/voice_agent.py (Python SDK)
from livekit.agents import AutoSubscribe, JobContext, WorkerOptions, cli
from livekit.agents.voice_assistant import VoiceAssistant
from livekit.plugins import openai, silero
async def entrypoint(ctx: JobContext):
await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)
assistant = VoiceAssistant(
vad=silero.VAD.load(),
stt=openai.STT(),
llm=openai.LLM(),
tts=openai.TTS(),
)
assistant.start(ctx.room)
await assistant.say("Hello! How can I help you?")
This opens up broad possibilities for building interactive voice assistants, customer support bots, and voice-powered educational tools.
Next Steps
After completing this guide, you can:
- Add authentication — use NextAuth or Better Auth to protect rooms
- Add waiting rooms — let the host accept participants before they enter
- Build an AI agent — use LiveKit Agents to add an intelligent voice assistant
- Add recording — record calls and store them in S3
- Optimize performance — use Simulcast to adapt quality to network speed
- Add a whiteboard — integrate a collaborative drawing tool in the room
Conclusion
In this guide, we built a complete video conferencing application using LiveKit and Next.js. We learned how to generate access tokens from the server, build a video UI using pre-built React components, customize the toolbar, add text chat via data channels, and build a pre-join preview screen.
LiveKit dramatically simplifies building real-time video and audio applications. Its open-source architecture and ready-made React libraries make it easy to get started quickly, while its advanced APIs (Agents, Egress, Ingress) give you the power to build anything from simple video conferences to sophisticated AI voice agents.