Tutorial · May 8, 2026 · 28 min read

Vercel AI Elements: Pre-built React Components for AI Chat UIs in Next.js

Stop hand-rolling chat bubbles, scroll buttons, and reasoning toggles. Vercel AI Elements ships every primitive you need for a production AI UI as shadcn-style components you own. This tutorial wires Conversation, Message, PromptInput, Reasoning, and Tool into a Next.js 15 chat with the AI SDK 5 — including attachments, model picker, and streaming reasoning.

Every AI app shipped in 2025 ended up reinventing the same chat UI: a scroll container that auto-pins to the bottom, message bubbles that swap by role, a textarea that grows, a submit button that flips to a stop button mid-stream, a collapsible reasoning panel, attachment chips. None of it is hard, but together it eats a sprint and the result still looks slightly off.

Vercel AI Elements is the answer to that. It is a component library — built on shadcn/ui and tuned for the AI SDK — that ships every one of those primitives as installable React components you own in your repo. You run one CLI command, you get the source code dropped into components/ai-elements, and you wire them up against useChat. By the end of this tutorial you will have a Next.js 15 chat app with streaming responses, reasoning toggles, file attachments, a model selector, and a web-search action button — without writing a single piece of UI from scratch.

AI Elements is not a black box. Unlike most chat libraries, it does not ship as an opaque npm package. It uses the shadcn pattern: the CLI copies real source files into your project. You can edit anything, restyle anything, delete what you do not need. Vercel maintains the registry; you own the code.

What You Will Learn

By the end of this tutorial, you will be able to:

  • Install AI Elements components into a Next.js 15 App Router project via the registry CLI
  • Build a /api/chat route using the AI SDK 5 streamText and toUIMessageStreamResponse helpers
  • Wire Conversation, Message, and MessageResponse to render streaming AI output
  • Use PromptInput with PromptInputTextarea, PromptInputSubmit, and PromptInputBody for a polished input
  • Add file attachments with PromptInputActionAddAttachments and the Attachments primitives
  • Render model reasoning with Reasoning, ReasoningTrigger, and ReasoningContent
  • Display tool calls and web sources using Tool, Source, and Sources

Prerequisites

Before starting, ensure you have:

  • Node.js 20 or later (the AI SDK 5 ESM build refuses earlier versions)
  • Next.js 15 App Router familiarity (Server Components vs Client Components)
  • An OpenAI or Anthropic API key — we will use OpenAI but the swap is one line
  • shadcn/ui already initialized (or you let the AI Elements CLI initialize it)
  • Comfort with React hooks and TypeScript

What You Will Build

A single-page chat app called Noqta Chat with a real conversation thread, streaming text, expandable reasoning, image and PDF attachments, a model picker between GPT-4o and Claude Sonnet 4.6, and a "Search the web" toggle that flips behavior server-side. The whole UI will be composed from AI Elements primitives that live in your components/ai-elements/ folder.

Step 1: Project Setup

Spin up a fresh Next.js 15 project with TypeScript and Tailwind:

npx create-next-app@latest noqta-chat \
  --typescript --tailwind --app --eslint --src-dir
cd noqta-chat

Initialize shadcn/ui (AI Elements needs it as a base layer):

npx shadcn@latest init

Pick Neutral as the base color and CSS variables when prompted. This sets up globals.css, lib/utils.ts, and components.json.

Now install AI Elements. The fastest path is the dedicated CLI, which adds every component at once:

npx ai-elements@latest

You should see new files appear in src/components/ai-elements/ (we created the project with --src-dir): conversation.tsx, message.tsx, prompt-input.tsx, reasoning.tsx, tool.tsx, source.tsx, attachments.tsx, code-block.tsx, and a few others. These are real source files — open one and you will see standard shadcn-style components built on Radix primitives.

Prefer to install one component at a time? Use npx ai-elements@latest add conversation to add only the conversation primitives, or use the underlying shadcn registry directly: npx shadcn@latest add https://elements.ai-sdk.dev/api/registry/all.json. Both produce identical output.

Step 2: Install the AI SDK 5

AI Elements is designed to plug into the AI SDK 5 useChat hook. Install the runtime, the React bindings, and at least one provider:

npm install ai @ai-sdk/react @ai-sdk/openai @ai-sdk/anthropic zod

Add your keys to .env.local:

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

Restart npm run dev so Next.js picks up the new env vars.

Step 3: The Chat API Route

The AI SDK 5 streams UI messages — a structured format where each message is a list of typed parts (text, reasoning, tool, source, file). AI Elements components read those parts directly, so the server only needs to forward the model stream as a UI message stream.

Create src/app/api/chat/route.ts:

import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import {
  convertToModelMessages,
  streamText,
  type UIMessage,
} from "ai";
 
export const maxDuration = 30;
 
type ChatBody = {
  messages: UIMessage[];
  model?: string;
  webSearch?: boolean;
};
 
export async function POST(req: Request) {
  const { messages, model = "gpt-4o", webSearch = false }: ChatBody =
    await req.json();
 
  const provider = model.startsWith("claude")
    ? anthropic(model)
    : openai(model);
 
  const result = streamText({
    model: provider,
    system: webSearch
      ? "You can search the web. Cite sources you used."
      : "You are a helpful assistant. Be concise and accurate.",
    messages: convertToModelMessages(messages),
  });
 
  return result.toUIMessageStreamResponse({
    sendReasoning: true,
    sendSources: true,
  });
}

Two things matter here:

  1. convertToModelMessages turns the UI message format (parts array) into the chat completion format every provider expects.
  2. toUIMessageStreamResponse produces the framed stream that useChat and AI Elements consume. Passing sendReasoning: true is what surfaces Claude's extended thinking; sendSources: true lets web-search results flow as source parts.
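
To make the parts format concrete, here is an illustrative (not exhaustive) sketch of a finished assistant message and a helper that flattens its text parts. The part values are invented for illustration; the shape, a `parts` array of typed objects, is what matters:

```typescript
// Illustrative UI message: the AI SDK 5 represents each message as a
// role plus an ordered array of typed parts.
type Part =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string }
  | { type: "source-url"; url: string; title?: string };

type ChatUIMessage = { id: string; role: "user" | "assistant"; parts: Part[] };

const assistantMessage: ChatUIMessage = {
  id: "msg_1",
  role: "assistant",
  parts: [
    { type: "reasoning", text: "The user asked for a capital city..." },
    { type: "text", text: "The capital of Tunisia is Tunis." },
    { type: "source-url", url: "https://example.com/tunisia" },
  ],
};

// Join only the visible text parts: the portion MessageResponse renders.
function visibleText(message: ChatUIMessage): string {
  return message.parts
    .filter((p): p is Extract<Part, { type: "text" }> => p.type === "text")
    .map((p) => p.text)
    .join("\n\n");
}

console.log(visibleText(assistantMessage));
```

This is why the client code in the next step iterates `message.parts` with a switch on `part.type` instead of reading a single content string.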

Step 4: The Bare Conversation Shell

Replace src/app/page.tsx with a Client Component that renders the conversation skeleton. We will start minimal and layer features in later steps.

"use client";
 
import { useState } from "react";
import { useChat } from "@ai-sdk/react";
import {
  Conversation,
  ConversationContent,
  ConversationEmptyState,
  ConversationScrollButton,
} from "@/components/ai-elements/conversation";
import {
  Message,
  MessageContent,
  MessageResponse,
} from "@/components/ai-elements/message";
import {
  PromptInput,
  type PromptInputMessage,
  PromptInputTextarea,
  PromptInputSubmit,
} from "@/components/ai-elements/prompt-input";
import { MessageSquare } from "lucide-react";
 
export default function ChatPage() {
  const [input, setInput] = useState("");
  const { messages, sendMessage, status } = useChat();
 
  const handleSubmit = (message: PromptInputMessage) => {
    // text is optional on PromptInputMessage, so guard with optional chaining
    if (!message.text?.trim()) return;
    sendMessage({ text: message.text });
    setInput("");
  };
 
  return (
    <main className="mx-auto flex h-dvh max-w-3xl flex-col p-6">
      <Conversation className="flex-1">
        <ConversationContent>
          {messages.length === 0 ? (
            <ConversationEmptyState
              icon={<MessageSquare className="size-12" />}
              title="Start a conversation"
              description="Ask Noqta Chat anything"
            />
          ) : (
            messages.map((message) => (
              <Message from={message.role} key={message.id}>
                <MessageContent>
                  {message.parts.map((part, i) =>
                    part.type === "text" ? (
                      <MessageResponse key={`${message.id}-${i}`}>
                        {part.text}
                      </MessageResponse>
                    ) : null
                  )}
                </MessageContent>
              </Message>
            ))
          )}
        </ConversationContent>
        <ConversationScrollButton />
      </Conversation>
 
      <PromptInput onSubmit={handleSubmit} className="mt-4">
        <PromptInputTextarea
          value={input}
          placeholder="Send a message..."
          onChange={(e) => setInput(e.currentTarget.value)}
        />
        <PromptInputSubmit
          status={status === "streaming" ? "streaming" : "ready"}
          disabled={!input.trim()}
        />
      </PromptInput>
    </main>
  );
}

Run npm run dev, open http://localhost:3000, and you should already have a working chat. Conversation handles the auto-scroll behavior, ConversationScrollButton shows a "jump to latest" pill when you scroll up, and MessageResponse handles streaming markdown rendering. The whole shell is fewer than 60 lines of JSX.

Step 5: Iterating Over Message Parts the Right Way

The pattern in Step 4 only renders text parts. Real responses also include reasoning, tool-*, source-url, and file parts. Refactor your message rendering into a dedicated component so each tutorial step can grow it cleanly.

Create src/components/chat-message-parts.tsx:

"use client";
 
import type { UIMessage } from "ai";
import {
  MessageResponse,
} from "@/components/ai-elements/message";
 
export function ChatMessageParts({ message }: { message: UIMessage }) {
  return (
    <>
      {message.parts.map((part, i) => {
        const key = `${message.id}-${i}`;
 
        switch (part.type) {
          case "text":
            return <MessageResponse key={key}>{part.text}</MessageResponse>;
          default:
            return null;
        }
      })}
    </>
  );
}

Swap the inline .map in page.tsx for <ChatMessageParts message={message} />. Now every new part type we wire up adds one case here instead of branching the page.

Step 6: Reasoning Streaming

Claude Sonnet 4.6 and GPT models with reasoning emit a separate reasoning part stream before the final text. AI Elements gives you a collapsible disclosure for it.

Update src/components/chat-message-parts.tsx:

"use client";
 
import type { UIMessage } from "ai";
import { MessageResponse } from "@/components/ai-elements/message";
import {
  Reasoning,
  ReasoningContent,
  ReasoningTrigger,
} from "@/components/ai-elements/reasoning";
 
type Props = {
  message: UIMessage;
  isLastMessage: boolean;
  isStreaming: boolean;
};
 
export function ChatMessageParts({ message, isLastMessage, isStreaming }: Props) {
  const reasoningParts = message.parts.filter((p) => p.type === "reasoning");
  const reasoningText = reasoningParts.map((p) => p.text).join("\n\n");
  const lastPart = message.parts.at(-1);
  const isReasoningStreaming =
    isLastMessage && isStreaming && lastPart?.type === "reasoning";
 
  return (
    <>
      {reasoningParts.length > 0 && (
        <Reasoning className="w-full" isStreaming={isReasoningStreaming}>
          <ReasoningTrigger />
          <ReasoningContent>{reasoningText}</ReasoningContent>
        </Reasoning>
      )}
 
      {message.parts.map((part, i) => {
        const key = `${message.id}-${i}`;
        if (part.type === "text") {
          return <MessageResponse key={key}>{part.text}</MessageResponse>;
        }
        return null;
      })}
    </>
  );
}

Pass the streaming flags from page.tsx:

{messages.map((message, index) => (
  <Message from={message.role} key={message.id}>
    <MessageContent>
      <ChatMessageParts
        message={message}
        isLastMessage={index === messages.length - 1}
        isStreaming={status === "streaming"}
      />
    </MessageContent>
  </Message>
))}

Reasoning opens automatically while streaming, animates a token shimmer, and collapses itself when the model finishes thinking — that auto-close behavior is the part everyone gets wrong when hand-rolling it.

Step 7: A Polished PromptInput with Toolbar

The bare PromptInput from Step 4 is fine, but the real value of AI Elements is the layout primitives that compose into a Claude-style toolbar. Replace your input with a header-body-footer layout:

import {
  PromptInput,
  PromptInputBody,
  PromptInputButton,
  PromptInputFooter,
  type PromptInputMessage,
  PromptInputSelect,
  PromptInputSelectContent,
  PromptInputSelectItem,
  PromptInputSelectTrigger,
  PromptInputSelectValue,
  PromptInputSubmit,
  PromptInputTextarea,
  PromptInputTools,
} from "@/components/ai-elements/prompt-input";
import { GlobeIcon } from "lucide-react";
 
const models = [
  { id: "gpt-4o", name: "GPT-4o" },
  { id: "claude-sonnet-4-6", name: "Claude Sonnet 4.6" },
];
 
// inside ChatPage:
const [model, setModel] = useState(models[0].id);
const [webSearch, setWebSearch] = useState(false);
 
const handleSubmit = (message: PromptInputMessage) => {
  if (!message.text?.trim() && !message.files?.length) return;
  sendMessage(
    { text: message.text, files: message.files },
    { body: { model, webSearch } }
  );
  setInput("");
};
 
return (
  // ...Conversation above...
  <PromptInput onSubmit={handleSubmit} className="mt-4" globalDrop multiple>
    <PromptInputBody>
      <PromptInputTextarea
        value={input}
        onChange={(e) => setInput(e.currentTarget.value)}
        placeholder="Ask anything..."
      />
    </PromptInputBody>
    <PromptInputFooter>
      <PromptInputTools>
        <PromptInputButton
          onClick={() => setWebSearch(!webSearch)}
          variant={webSearch ? "default" : "ghost"}
          tooltip={{ content: "Search the web" }}
        >
          <GlobeIcon size={16} />
          <span>Search</span>
        </PromptInputButton>
 
        <PromptInputSelect value={model} onValueChange={setModel}>
          <PromptInputSelectTrigger>
            <PromptInputSelectValue />
          </PromptInputSelectTrigger>
          <PromptInputSelectContent>
            {models.map((m) => (
              <PromptInputSelectItem key={m.id} value={m.id}>
                {m.name}
              </PromptInputSelectItem>
            ))}
          </PromptInputSelectContent>
        </PromptInputSelect>
      </PromptInputTools>
      <PromptInputSubmit
        status={status === "streaming" ? "streaming" : "ready"}
      />
    </PromptInputFooter>
  </PromptInput>
);

Three things just happened:

  • The body parameter on sendMessage forwards { model, webSearch } to your route handler — that is how Step 3's body fields actually get populated.
  • PromptInputSubmit automatically swaps between a paper-plane icon and a stop-square when status === "streaming". Clicking the stop variant calls useChat's abort under the hood.
  • globalDrop multiple enables drag-and-drop attachments anywhere on the page, which we light up next.
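
Assuming useChat's default behavior of merging the body option into the POST payload, the route from Step 3 receives JSON shaped roughly like this. The message values are invented and providerFor is a hypothetical stand-in for the prefix check the route performs:

```typescript
// Sketch of the JSON body /api/chat receives after
// sendMessage({ text: "hi" }, { body: { model, webSearch } }).
// The messages array comes from useChat; the extra fields are merged
// in at the top level, which is why Step 3 destructures them directly.
const requestBody = {
  messages: [
    { id: "msg_1", role: "user", parts: [{ type: "text", text: "hi" }] },
  ],
  model: "claude-sonnet-4-6",
  webSearch: true,
};

// The provider pick in Step 3 is a plain prefix check on the model id.
const providerFor = (model: string) =>
  model.startsWith("claude") ? "anthropic" : "openai";

console.log(providerFor(requestBody.model));
```

If you rename the fields here, rename them in the route's ChatBody type too; the two sides share no schema beyond this convention.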

Step 8: File Attachments

AI Elements ships an Attachments primitive that pairs with PromptInputActionAddAttachments. The hook usePromptInputAttachments exposes the live attachment state.

Create src/components/attachments-display.tsx:

"use client";
 
import {
  Attachment,
  AttachmentPreview,
  AttachmentRemove,
  Attachments,
} from "@/components/ai-elements/attachments";
import { usePromptInputAttachments } from "@/components/ai-elements/prompt-input";
 
export function AttachmentsDisplay() {
  const attachments = usePromptInputAttachments();
 
  if (attachments.files.length === 0) return null;
 
  return (
    <Attachments variant="inline">
      {attachments.files.map((attachment) => (
        <Attachment
          data={attachment}
          key={attachment.id}
          onRemove={() => attachments.remove(attachment.id)}
        >
          <AttachmentPreview />
          <AttachmentRemove />
        </Attachment>
      ))}
    </Attachments>
  );
}

Now extend the PromptInput block in page.tsx:

import {
  PromptInputActionAddAttachments,
  PromptInputActionMenu,
  PromptInputActionMenuContent,
  PromptInputActionMenuTrigger,
  PromptInputHeader,
} from "@/components/ai-elements/prompt-input";
import { AttachmentsDisplay } from "@/components/attachments-display";
 
// inside <PromptInput>:
<PromptInputHeader>
  <AttachmentsDisplay />
</PromptInputHeader>
<PromptInputBody>
  <PromptInputTextarea ... />
</PromptInputBody>
<PromptInputFooter>
  <PromptInputTools>
    <PromptInputActionMenu>
      <PromptInputActionMenuTrigger />
      <PromptInputActionMenuContent>
        <PromptInputActionAddAttachments />
      </PromptInputActionMenuContent>
    </PromptInputActionMenu>
    {/* ...web search button, model select... */}
  </PromptInputTools>
  <PromptInputSubmit ... />
</PromptInputFooter>

Drop a PNG or PDF onto the page. You will see the file render as a chip above the textarea, and on submit it lands in message.files — which the AI SDK 5 forwards to the model as a file part.
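
On the server you can inspect file parts before calling convertToModelMessages, for example to reject media types your model cannot read. The field names below (mediaType, url) follow the AI SDK 5 file part shape as documented, but treat them as an assumption and check your installed version; unsupportedFiles and ALLOWED are illustrative names:

```typescript
// Server-side sketch: scan incoming UI messages for file parts and
// flag any media types the model should not receive.
type FilePart = { type: "file"; mediaType: string; url: string };
type AnyPart = FilePart | { type: string; [k: string]: unknown };
type Msg = { role: string; parts: AnyPart[] };

const ALLOWED = new Set(["image/png", "image/jpeg", "application/pdf"]);

function unsupportedFiles(messages: Msg[]): FilePart[] {
  return messages
    .flatMap((m) => m.parts)
    .filter((p): p is FilePart => p.type === "file")
    .filter((p) => !ALLOWED.has(p.mediaType));
}

const msgs: Msg[] = [
  {
    role: "user",
    parts: [
      { type: "text", text: "what is this?" },
      { type: "file", mediaType: "image/png", url: "data:image/png;base64,AAA" },
      { type: "file", mediaType: "video/mp4", url: "data:video/mp4;base64,AAA" },
    ],
  },
];

console.log(unsupportedFiles(msgs).map((f) => f.mediaType));
```

Returning a 400 from the route when this list is non-empty is cheaper than letting the provider reject the request mid-stream.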

Step 9: Showing Tool Calls

When the model invokes a tool (custom tool, web search, code interpreter), the stream emits parts of type tool-<toolName>. AI Elements ships a Tool component that renders the call name, status, input, and output as a tidy collapsed card.

Add to chat-message-parts.tsx:

import {
  Tool,
  ToolContent,
  ToolHeader,
  ToolInput,
  ToolOutput,
} from "@/components/ai-elements/tool";
 
// inside the parts.map switch:
default: {
  if (part.type.startsWith("tool-")) {
    const toolPart = part as Extract<typeof part, { type: `tool-${string}` }>;
    return (
      <Tool key={key} defaultOpen={toolPart.state !== "output-available"}>
        <ToolHeader type={toolPart.type} state={toolPart.state} />
        <ToolContent>
          <ToolInput input={toolPart.input} />
          {toolPart.output !== undefined && (
            <ToolOutput
              output={toolPart.output}
              errorText={toolPart.errorText}
            />
          )}
        </ToolContent>
      </Tool>
    );
  }
  return null;
}

ToolHeader shows a status pill — "input-streaming", "input-available", "output-available", "output-error" — so the user sees the agent thinking, not a frozen UI.
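
The defaultOpen rule in the snippet above can be read as a tiny state machine over those four states. A pure sketch (state names mirror the pills; searchCall is an invented example part):

```typescript
// The lifecycle of a streamed tool part, in order:
type ToolState =
  | "input-streaming"   // arguments are still arriving token by token
  | "input-available"   // call is fully formed, execution pending
  | "output-available"  // result arrived
  | "output-error";     // execution failed

// Keep the card expanded until a result lands, so the user watches the
// call resolve instead of staring at a frozen UI. Errors stay open too.
const shouldDefaultOpen = (state: ToolState): boolean =>
  state !== "output-available";

const searchCall = {
  type: "tool-webSearch" as const,
  state: "input-available" as ToolState,
  input: { query: "capital of Tunisia" },
};

console.log(shouldDefaultOpen(searchCall.state));
```

Note that tool parts only appear in the stream if your route actually registers tools with the model; this step covers rendering them, not defining them.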

Step 10: Web-Search Sources

Once sendSources: true is set on the route response (Step 3), web-search-capable models stream source-url parts you can render as a citation footer.

import {
  Source,
  Sources,
  SourcesContent,
  SourcesTrigger,
} from "@/components/ai-elements/source";
 
// at the top of ChatMessageParts return:
const sourceParts = message.parts.filter((p) => p.type === "source-url");
 
{sourceParts.length > 0 && (
  <Sources>
    <SourcesTrigger count={sourceParts.length} />
    <SourcesContent>
      {sourceParts.map((s, i) => (
        <Source
          key={`${message.id}-src-${i}`}
          href={s.url}
          title={s.title ?? s.url}
        />
      ))}
    </SourcesContent>
  </Sources>
)}

Sources is a collapsible panel under the assistant reply with one entry per cited URL. Clicking each chip opens the source in a new tab — the standard pattern Perplexity popularized.
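
Models occasionally cite the same URL more than once, which inflates the count shown by SourcesTrigger. A small, hypothetical dedupeSources helper you could run before mapping:

```typescript
// Deduplicate source-url parts by URL, preserving first-seen order,
// so the citation count matches the number of distinct pages.
type SourcePart = { type: "source-url"; url: string; title?: string };

function dedupeSources(parts: SourcePart[]): SourcePart[] {
  const seen = new Set<string>();
  return parts.filter((p) => {
    if (seen.has(p.url)) return false;
    seen.add(p.url);
    return true;
  });
}

const cited: SourcePart[] = [
  { type: "source-url", url: "https://en.wikipedia.org/wiki/Tunis" },
  { type: "source-url", url: "https://en.wikipedia.org/wiki/Tunis", title: "Tunis" },
  { type: "source-url", url: "https://example.com/travel" },
];

console.log(dedupeSources(cited).length);
```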

Testing Your Implementation

  • Send "What is the capital of Tunisia?" — confirm the assistant streams text top-down with no scroll jitter.
  • Switch the model select to Claude Sonnet 4.6 and ask a math question — confirm the reasoning panel opens, animates while thinking, and collapses on completion.
  • Toggle the Search button on and ask a current-events question — confirm the Sources dropdown appears under the answer.
  • Drag a small image onto the page — confirm an attachment chip appears, then submit and verify the model references the image in its reply.
  • Mid-stream, click the stop button on PromptInputSubmit — confirm streaming halts and the partial response stays in place.

Troubleshooting

Cannot find module '@/components/ai-elements/conversation' — the AI Elements CLI did not run. Re-run npx ai-elements@latest from your project root and confirm files appear in src/components/ai-elements/.

Reasoning panel never opens — your route is missing sendReasoning: true on toUIMessageStreamResponse, or your selected model does not emit reasoning. Test with claude-sonnet-4-6.

Attachments fail with a 413 error — something is capping the request body before it reaches your handler. App Router route handlers do not enforce a body limit themselves; Server Actions default to 1 MB (raise experimental.serverActions.bodySizeLimit in next.config.ts if you post through one), and reverse proxies or hosting platforms often cap bodies around the same size.

UIMessage type errors after upgrading the AI SDK — AI SDK 5 renamed the field from content to parts. Make sure your version of @ai-sdk/react is at least 2.x; the AI Elements components depend on the parts-based shape.
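
If you persisted AI SDK 4 conversations, a one-off migration of text-only history might look like the sketch below. V4Message, V5Message, and toV5 are hypothetical names, and tool calls or attachments in old history need richer handling than this covers:

```typescript
// Migration sketch: AI SDK 4 stored `content: string` per message,
// AI SDK 5 expects a `parts` array of typed objects.
type V4Message = { id: string; role: "user" | "assistant"; content: string };
type V5Message = {
  id: string;
  role: "user" | "assistant";
  parts: { type: "text"; text: string }[];
};

const toV5 = (m: V4Message): V5Message => ({
  id: m.id,
  role: m.role,
  // Wrap the old string body as a single text part.
  parts: [{ type: "text", text: m.content }],
});

console.log(toV5({ id: "1", role: "user", content: "hi" }).parts[0].text);
```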

Next Steps

  • Pair this UI with the Mastra AI agents tutorial to give the chat real tools, memory, and a workflow engine.
  • Add Resend + React Email to send a transcript when the conversation ends.
  • Wrap the chat in a CopilotKit sidebar so it lives next to your app instead of taking the whole page.
  • Persist conversations to Drizzle + Neon keyed by user — AI Elements has nothing to do with persistence, so the choice is yours.

Conclusion

The interesting work in an AI app is the agent loop, the prompt design, the tools, and the evals. The UI is solved — but only if you stop solving it yourself. AI Elements is not a framework you adopt, it is a registry you copy from. Once the components are in your repo they are yours: restyle them with your tokens, swap icons, delete the parts of PromptInput you do not need, and move on to the parts of your product that actually differentiate you. Spend your sprint on the agent, not on the auto-scroll behavior.