Getting Started
Set up Stream UI in your Next.js project in under 2 minutes
Quick Start
The fastest way to add Stream UI to your project is with the CLI scaffold:
Scaffold
```bash
npx @prototyper-ui/cli add stream-ui
```

This creates two files and adds your API key placeholder to `.env.local`:

- `app/api/stream-ui/route.ts` — server handler that streams UI specs from an AI model
- `app/stream-ui/page.tsx` — ready-to-use page with the `<StreamUI />` component
What was scaffolded
The CLI creates a minimal but complete setup:
`app/api/stream-ui/route.ts` — A Next.js API route that calls an AI model and streams back JSONL patches:

```ts
import { createStreamUIHandler } from "@prototyperai/stream-ui/server"

export const POST = createStreamUIHandler({
  provider: "anthropic",
})
```

`app/stream-ui/page.tsx` — A client page that renders the streamed UI:
```tsx
"use client"

import { StreamUI } from "@prototyperai/stream-ui/react"

export default function StreamUIPage() {
  return (
    <div className="mx-auto max-w-4xl p-8">
      <h1 className="mb-4 text-2xl font-bold">Stream UI</h1>
      <p className="mb-8 text-muted-foreground">
        Describe a UI and watch it appear in real-time.
      </p>
      <StreamUI endpoint="/api/stream-ui" />
    </div>
  )
}
```

The StreamUI Component
<StreamUI /> is an all-in-one component that handles prompt input, streaming, spec assembly, and rendering. It connects to your API route and manages the full lifecycle.
```tsx
import { StreamUI } from "@prototyperai/stream-ui/react"

<StreamUI endpoint="/api/stream-ui" />
```

Props
| Prop | Type | Default | Description |
|---|---|---|---|
| `endpoint` | `string` | — | Required. URL of the stream-ui API route. |
| `placeholder` | `string` | `"Describe a UI..."` | Placeholder text for the prompt input. |
| `onSpec` | `(spec: Spec) => void` | — | Called when a new spec is received. |
| `onError` | `(error: Error) => void` | — | Called on streaming errors. |
| `className` | `string` | — | Additional CSS classes for the wrapper. |
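For example, the callback props can be used to observe the stream from outside the component. A usage sketch based on the props above; the logging is illustrative:

```tsx
<StreamUI
  endpoint="/api/stream-ui"
  placeholder="Describe a pricing page..."
  onSpec={(spec) => console.log("spec updated:", spec)}
  onError={(error) => console.error("stream failed:", error)}
/>
```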
Server Handler
createStreamUIHandler creates a Next.js-compatible POST handler. It builds a system prompt from the component catalog, calls the AI provider, and streams JSONL patches back to the client.
```ts
import { createStreamUIHandler } from "@prototyperai/stream-ui/server"

export const POST = createStreamUIHandler({
  provider: "anthropic",
  model: "claude-sonnet-4-20250514", // optional, this is the default
  maxTokens: 4096, // optional
  rules: ["Use a dark color scheme"], // optional extra instructions
})
```

| Option | Type | Description |
|---|---|---|
| `provider` | `"anthropic" \| "openai"` | Required. Which AI provider to use. |
| `apiKey` | `string` | API key. Falls back to the `ANTHROPIC_API_KEY` or `OPENAI_API_KEY` env vars. |
| `model` | `string` | Model identifier. Defaults to `claude-sonnet-4-20250514` (Anthropic) or `gpt-4o` (OpenAI). |
| `maxTokens` | `number` | Max tokens for the AI response. Default `4096`. |
| `rules` | `string[]` | Additional rules appended to the system prompt. |
| `systemPrompt` | `string` | Full system prompt override (bypasses catalog-based generation). |
| `mode` | `"generate" \| "chat"` | Prompt mode. `"generate"` outputs raw JSONL; `"chat"` wraps it in code fences. |
| `onRequest` | `(params) => override \| void` | Hook to modify prompts before they are sent to the AI. |
| `onResponse` | `(params) => void` | Hook called after the AI response completes. |
| `maxPromptLength` | `number` | Truncate user prompts to this character length. |
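Because the handler emits newline-delimited JSON, a custom client can consume the response body with plain string splitting. A minimal sketch in plain TypeScript, assuming an illustrative `{ op, path, value }` patch shape (this is a stand-in, not the library's actual schema — see Spec Format):

```typescript
// Illustrative patch shape -- a stand-in, not the library's actual schema.
type Patch = { op: string; path: string; value?: unknown }

// Split a buffered response chunk into complete JSONL lines, returning the
// parsed patches plus any trailing partial line to carry into the next read.
function parseJsonlChunk(buffer: string): { patches: Patch[]; rest: string } {
  const lines = buffer.split("\n")
  const rest = lines.pop() ?? "" // the last piece may be an incomplete line
  const patches = lines
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line) as Patch)
  return { patches, rest }
}
```

Re-buffering the trailing partial line matters because network chunks rarely end exactly on a newline boundary.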
Using OpenAI
To use OpenAI instead of Anthropic:
```bash
npx @prototyper-ui/cli add stream-ui --provider openai
```

Or change the route manually:
```ts
import { createStreamUIHandler } from "@prototyperai/stream-ui/server"

export const POST = createStreamUIHandler({
  provider: "openai",
})
```

Set the OpenAI key in `.env.local`:

```
OPENAI_API_KEY=sk-your-openai-key-here
```

Manual Setup
If you prefer to set things up yourself instead of using the CLI scaffold:
```bash
# use your preferred package manager
pnpm add @prototyperai/stream-ui
npm install @prototyperai/stream-ui
yarn add @prototyperai/stream-ui
bun add @prototyperai/stream-ui
```

Stream UI requires `react@^19` and `react-dom@^19` as peer dependencies.
Create the two files shown in What was scaffolded above, then add your API key to .env.local.
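For the default Anthropic provider, that means a single line in `.env.local` (placeholder value shown):

```
ANTHROPIC_API_KEY=sk-ant-your-key-here
```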
Low-level API
For more control, use the lower-level hooks and renderer directly:
```tsx
"use client"

import { Renderer, useUIStream } from "@prototyperai/stream-ui"
import { prototyperComponents } from "@prototyperai/stream-ui/components"

export function CustomStreamUI() {
  const { spec, isStreaming, error, send, clear } = useUIStream({
    url: "/api/stream-ui",
  })

  return (
    <div>
      <button onClick={() => send({ prompt: "Build a login form" })}>
        Generate
      </button>
      {error && <p>Error: {error.message}</p>}
      {spec && (
        <Renderer
          spec={spec}
          registry={prototyperComponents}
          loading={isStreaming}
        />
      )}
    </div>
  )
}
```

The Renderer component accepts these props:
| Prop | Type | Description |
|---|---|---|
| `spec` | `Spec \| null` | The UI spec to render. Pass `null` to render nothing. |
| `registry` | `ComponentRegistry` | Map of component type names to renderer functions. |
| `handlers` | `Record<string, ActionHandler>` | Custom action handlers (merged with built-in actions). |
| `functions` | `Record<string, ComputedFunction>` | Named functions for `$computed` expressions. |
| `navigate` | `(url: string) => void` | Navigation callback for the `navigate` action. |
| `onConfirm` | `(confirm: ActionConfirm) => Promise<boolean>` | Confirmation dialog handler. |
| `onStateChange` | `(state: StateModel) => void` | Called whenever state changes. |
| `loading` | `boolean` | Passed through to components (e.g. for skeleton states). |
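To make the registry idea concrete, here is a self-contained sketch of the dispatch pattern it implies. The types are simplified stand-ins for the library's `Spec` and `ComponentRegistry`, and this version renders to plain strings rather than React elements:

```typescript
// Simplified stand-ins for the library's node and registry types.
type SpecNode = { type: string; props?: Record<string, unknown> }
type RenderFn = (node: SpecNode) => string

// A registry maps spec "type" names to renderer functions.
const registry: Record<string, RenderFn> = {
  Heading: (n) => `# ${String(n.props?.content ?? "")}`,
  Text: (n) => String(n.props?.content ?? ""),
}

// The renderer looks up each node's type and delegates to the matching entry.
function renderNode(node: SpecNode): string {
  const render = registry[node.type]
  return render ? render(node) : `[unknown component: ${node.type}]`
}
```

The real `Renderer` does the same lookup per node, which is why an incomplete custom registry degrades gracefully: unknown types simply fail to render rather than crashing the whole spec.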
Available Components
Stream UI includes wrappers for 20 Prototyper UI components:
| Category | Components |
|---|---|
| Actions | Button |
| Forms | Input, Textarea, Select, Checkbox, Switch, RadioGroup, Slider |
| Overlays | Dialog, Tooltip |
| Navigation | Tabs |
| Layout | Card, Accordion, Separator |
| Data Display | Heading, Text, Badge, Avatar, Alert |
| Feedback | Progress |
Import the full registry:
```ts
import { prototyperComponents } from "@prototyperai/stream-ui/components"
```

Or import individual wrappers to build a custom registry:

```ts
import { buttonRenderer } from "@prototyperai/stream-ui/components"

const myRegistry = {
  Button: buttonRenderer,
  // ... add only what you need
}
```

Next Steps
- Spec Format — Deep dive into the spec structure
- Expressions — Dynamic values and data binding
- Streaming — Connect to any AI model
- Actions — Handle user interactions