Prototyper UI

Streaming

Connect Stream UI to an AI model with JSONL streaming and progressive rendering

Stream UI renders interfaces progressively as an AI model generates them. The model outputs JSONL (newline-delimited JSON), where each line is an RFC 6902 JSON Patch operation that incrementally builds the spec.

JSONL Format

Each line in the stream is a single JSON Patch operation:

{"op":"add","path":"/root","value":"card"}
{"op":"add","path":"/elements/card","value":{"type":"Card","props":{},"children":["heading"]}}
{"op":"add","path":"/elements/heading","value":{"type":"Heading","props":{"text":"Hello","level":2}}}
{"op":"add","path":"/state","value":{"count":0}}

The stream compiler processes these lines incrementally. As each valid line arrives, it applies the patch to the in-progress spec and emits an updated snapshot for React to render.
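The core loop can be sketched as follows. This is a minimal illustration of the idea, not the actual compiler: it handles only the add operation, and the path-walking helper is a simplified assumption.

```typescript
// Minimal sketch of an incremental JSONL patch compiler.
// The real compiler handles all RFC 6902 operations and partial chunks.
type Spec = Record<string, unknown>

function compileLines(jsonl: string): Spec {
  const spec: Spec = {}
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue // empty lines are skipped
    let patch: any
    try {
      patch = JSON.parse(line)
    } catch {
      continue // malformed lines are skipped
    }
    if (patch.op !== "add") continue // sketch: only "add" is handled here
    // Walk the path, creating intermediate objects as needed
    const keys = patch.path.split("/").slice(1)
    let node: any = spec
    for (const key of keys.slice(0, -1)) {
      node[key] ??= {}
      node = node[key]
    }
    node[keys[keys.length - 1]] = patch.value
  }
  return spec
}

const snapshot = compileLines(
  '{"op":"add","path":"/root","value":"card"}\n' +
    '{"op":"add","path":"/elements/card","value":{"type":"Card"}}\n'
)
// snapshot.root === "card"; snapshot.elements.card.type === "Card"
```

In the real compiler, each complete line produces a fresh snapshot so React can re-render after every patch rather than waiting for the stream to finish.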

Patch Operations

Stream UI supports all RFC 6902 operations:

| Operation | Required Fields | Description |
| --- | --- | --- |
| `add` | `path`, `value` | Insert a new value at the path. Creates intermediate objects as needed. |
| `replace` | `path`, `value` | Overwrite the existing value at the path. |
| `remove` | `path` | Delete the value at the path. |
| `move` | `path`, `from` | Remove the value at `from` and add it at `path`. |
| `copy` | `path`, `from` | Copy the value at `from` to `path`. |
| `test` | `path`, `value` | Assert the value at `path` equals `value`. Throws on mismatch. |
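To make the semantics of `move`, `copy`, and `test` concrete, here is a toy applier. It is an illustration only, not the library's implementation: the `locate` helper is an assumption that resolves simple JSON Pointer paths without the validation a real RFC 6902 engine performs.

```typescript
type Patch = { op: string; path: string; from?: string; value?: unknown }

// Resolve a JSON Pointer to [parent, lastKey], creating intermediate objects.
function locate(doc: any, pointer: string): [any, string] {
  const keys = pointer.split("/").slice(1)
  let node = doc
  for (const k of keys.slice(0, -1)) node = node[k] ??= {}
  return [node, keys[keys.length - 1]]
}

function get(doc: any, pointer: string): unknown {
  const [parent, key] = locate(doc, pointer)
  return parent[key]
}

function apply(doc: any, patch: Patch): void {
  const [parent, key] = locate(doc, patch.path)
  switch (patch.op) {
    case "add":
      // "-" appends to an array (RFC 6902 array append)
      if (Array.isArray(parent) && key === "-") parent.push(patch.value)
      else parent[key] = patch.value
      break
    case "replace":
      parent[key] = patch.value
      break
    case "remove":
      if (Array.isArray(parent)) parent.splice(Number(key), 1)
      else delete parent[key]
      break
    case "move": {
      const moved = get(doc, patch.from!)
      apply(doc, { op: "remove", path: patch.from! })
      apply(doc, { op: "add", path: patch.path, value: moved })
      break
    }
    case "copy":
      apply(doc, { op: "add", path: patch.path, value: get(doc, patch.from!) })
      break
    case "test":
      if (JSON.stringify(parent[key]) !== JSON.stringify(patch.value))
        throw new Error(`test failed at ${patch.path}`)
      break
  }
}

const demo: any = { a: 1 }
apply(demo, { op: "copy", from: "/a", path: "/b" }) // demo is now { a: 1, b: 1 }
apply(demo, { op: "move", from: "/a", path: "/c" }) // demo is now { b: 1, c: 1 }
```

Note that `move` is defined as a `remove` followed by an `add`, and `copy` as an `add` of the value read from `from` — the same equivalences RFC 6902 uses.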

Common Patterns

Build the initial structure:

{"op":"add","path":"/root","value":"main"}
{"op":"add","path":"/elements/main","value":{"type":"Card","props":{},"children":[]}}
{"op":"add","path":"/state","value":{}}

Add elements incrementally:

{"op":"add","path":"/elements/title","value":{"type":"Heading","props":{"text":"Dashboard","level":1}}}
{"op":"add","path":"/elements/main/children/-","value":"title"}

The trailing /- in the path appends the value to the end of an array (RFC 6902's array-append token).

Update a prop on an existing element:

{"op":"replace","path":"/elements/title/props/text","value":"Updated Title"}

Remove an element:

{"op":"remove","path":"/elements/old-widget"}

Progressive Streaming Pattern

A typical stream follows this order:

Root and scaffold

The model outputs the root key and top-level container elements first. The UI shows the basic structure immediately.

{"op":"add","path":"/root","value":"page"}
{"op":"add","path":"/elements/page","value":{"type":"Card","props":{},"children":[]}}

Elements

Individual elements are added one by one. Adding an element and then appending its key to its parent's children array makes it visible immediately.

{"op":"add","path":"/elements/heading","value":{"type":"Heading","props":{"text":"Welcome","level":2}}}
{"op":"add","path":"/elements/page/children/-","value":"heading"}
{"op":"add","path":"/elements/intro","value":{"type":"Text","props":{"content":"Hello world"}}}
{"op":"add","path":"/elements/page/children/-","value":"intro"}

State

State is typically added last (or alongside the elements that use it), since elements can render without state — expressions just resolve to undefined until state arrives.

{"op":"add","path":"/state/user","value":{"name":"Alice","email":"alice@example.com"}}
{"op":"add","path":"/state/preferences","value":{"theme":"dark"}}

useUIStream Hook

The useUIStream hook manages the entire streaming lifecycle: connecting to an endpoint, feeding chunks to the stream compiler, and exposing the latest spec for rendering.

"use client"

import { Renderer, useUIStream } from "@prototyperai/stream-ui"
import { prototyperComponents } from "@prototyperai/stream-ui/components"

export function StreamingUI() {
  const { spec, isStreaming, error, send, clear } = useUIStream({
    url: "/api/stream-ui",
  })

  return (
    <div>
      <div>
        <button onClick={() => send({ prompt: "Build a settings page" })}>
          Generate
        </button>
        <button onClick={clear} disabled={!spec}>
          Clear
        </button>
      </div>

      {error && <p>Error: {error.message}</p>}

      {spec && (
        <Renderer
          spec={spec}
          registry={prototyperComponents}
          loading={isStreaming}
        />
      )}
    </div>
  )
}

Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `url` | `string` | Required | Endpoint URL that returns a JSONL stream. |
| `method` | `"GET" \| "POST"` | `"POST"` | HTTP method for the request. |
| `headers` | `Record<string, string>` | `{}` | Additional request headers. `Content-Type: application/json` is always included. |
| `body` | `unknown` | `undefined` | Default request body (overridden by `send(body)`). |
| `autoStart` | `boolean` | `false` | If true, starts streaming immediately on mount. |

Return Value

| Field | Type | Description |
| --- | --- | --- |
| `spec` | `Spec \| null` | The latest compiled spec, or null before any data arrives. |
| `isStreaming` | `boolean` | Whether a stream is currently in progress. |
| `error` | `Error \| null` | The last error, or null. Cleared on each new `send()`. |
| `send` | `(body?: unknown) => void` | Start a new stream. Aborts any in-progress stream. The optional body parameter overrides the default `body` option. |
| `clear` | `() => void` | Abort any in-progress stream, reset the spec to null, and clear errors. |

createStreamCompiler for Custom Integrations

If you need lower-level control (e.g. integrating with a WebSocket, Server-Sent Events, or a custom transport), use createStreamCompiler directly:

import { createStreamCompiler } from "@prototyperai/stream-ui/core"

const compiler = createStreamCompiler()

// Push arbitrary text chunks — the compiler handles partial lines
const updatedSpec = compiler.push('{"op":"add","path":"/root","value":"main"}\n')

// Get the current spec at any time
const currentSpec = compiler.getSpec()

// Reset to start fresh
compiler.reset()
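For example, wiring the compiler to Server-Sent Events might look like the sketch below. The /api/stream-ui-sse endpoint, the assumption that each SSE message carries one JSONL line, and the render() callback are all hypothetical:

```typescript
import { createStreamCompiler } from "@prototyperai/stream-ui/core"

const compiler = createStreamCompiler()
const source = new EventSource("/api/stream-ui-sse") // hypothetical endpoint

source.onmessage = (event) => {
  // SSE strips trailing newlines, so re-add the JSONL delimiter
  const updated = compiler.push(event.data + "\n")
  if (updated) {
    render(updated) // placeholder — e.g. push the spec into React state
  }
}

source.onerror = () => source.close()
```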

StreamCompiler API

| Method | Returns | Description |
| --- | --- | --- |
| `push(chunk)` | `Spec \| null` | Feed a text chunk. Returns updated spec if any patches applied, null otherwise. |
| `getSpec()` | `Spec \| null` | Current spec, or null if nothing received yet. |
| `reset()` | `void` | Clear the spec and internal line buffer. |

The compiler correctly handles text chunks that split across line boundaries. It buffers incomplete lines internally and only processes complete lines (delimited by \n).
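For instance, a patch split across two chunks only produces a spec once the closing newline arrives:

```typescript
import { createStreamCompiler } from "@prototyperai/stream-ui/core"

const compiler = createStreamCompiler()

compiler.push('{"op":"add","path"')         // incomplete line — buffered, returns null
compiler.push(':"/root","value":"main"}\n') // newline completes it — returns the updated spec
```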

Example: Manual Fetch + ReadableStream

"use client"

import { useState } from "react"
import { createStreamCompiler } from "@prototyperai/stream-ui/core"
import { Renderer } from "@prototyperai/stream-ui"
import { prototyperComponents } from "@prototyperai/stream-ui/components"
import type { Spec } from "@prototyperai/stream-ui/core"

export function ManualStreamExample() {
  const [spec, setSpec] = useState<Spec | null>(null)

  async function generate() {
    const compiler = createStreamCompiler()

    const response = await fetch("/api/stream-ui", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt: "Build a pricing table" }),
    })

    const reader = response.body!
      .pipeThrough(new TextDecoderStream())
      .getReader()

    while (true) {
      const { done, value } = await reader.read()
      if (done) break

      const updated = compiler.push(value)
      if (updated) {
        setSpec(updated)
      }
    }

    // Flush any remaining buffered content
    const final = compiler.push("\n")
    if (final) setSpec(final)
  }

  return (
    <div>
      <button onClick={generate}>Generate</button>
      {spec && <Renderer spec={spec} registry={prototyperComponents} />}
    </div>
  )
}

Example: Server-Side API Route

Here is a minimal Next.js API route that streams JSONL from an AI model:

// app/api/stream-ui/route.ts
import { NextRequest } from "next/server"

export async function POST(req: NextRequest) {
  const { prompt } = await req.json()

  // Call your AI model here. This example shows the response format.
  const patches = [
    { op: "add", path: "/root", value: "card" },
    { op: "add", path: "/elements/card", value: { type: "Card", props: {}, children: ["heading", "text"] } },
    { op: "add", path: "/elements/heading", value: { type: "Heading", props: { text: "Response", level: 2 } } },
    { op: "add", path: "/elements/text", value: { type: "Text", props: { content: `You asked: ${prompt}` } } },
  ]

  const stream = new ReadableStream({
    start(controller) {
      const encoder = new TextEncoder()
      for (const patch of patches) {
        controller.enqueue(encoder.encode(JSON.stringify(patch) + "\n"))
      }
      controller.close()
    },
  })

  return new Response(stream, {
    headers: { "Content-Type": "application/x-ndjson" },
  })
}

In a real integration, you would stream patches as your AI model generates them, writing each line to the response as it becomes available.
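One way to structure that is to drive the ReadableStream from an async iterator, enqueueing each line as it is produced. This is a sketch: streamPatchesFromModel is a stand-in for a real model client that would yield patches as tokens arrive.

```typescript
// Turn an async iterable of patch objects into a JSONL byte stream,
// writing each line as soon as it becomes available.
function jsonlStream(patches: AsyncIterable<object>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder()
  return new ReadableStream({
    async start(controller) {
      for await (const patch of patches) {
        controller.enqueue(encoder.encode(JSON.stringify(patch) + "\n"))
      }
      controller.close()
    },
  })
}

// Stand-in for an AI model client that yields patches incrementally.
async function* streamPatchesFromModel() {
  yield { op: "add", path: "/root", value: "card" }
  yield { op: "add", path: "/elements/card", value: { type: "Card", props: {}, children: [] } }
}

// In a route handler:
// return new Response(jsonlStream(streamPatchesFromModel()), {
//   headers: { "Content-Type": "application/x-ndjson" },
// })
```

Because the stream compiler on the client applies each line as it arrives, the UI updates after every enqueued patch rather than after the response completes.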

Error Handling

The stream compiler is resilient to malformed input:

  • Empty lines are silently skipped
  • Lines that fail JSON parsing are silently skipped
  • Lines with invalid patch operations (missing op or path) are silently skipped
  • Valid patches are applied regardless of surrounding invalid lines

The useUIStream hook captures HTTP errors and stream errors in its error field. Abort errors (from calling send() while streaming) are ignored.

Next Steps

  • Spec Format — Understand what the patches are building
  • Actions — Add interactivity to streamed UIs
  • Expressions — Dynamic data binding in props
