Prompt Engineering

Customize AI prompts, use structured outputs, and optimize generation quality.

The prompt system converts your catalog into precise instructions for AI models. buildSystemPrompt generates the system message describing your components, spec format, and rules. buildUserPrompt wraps user requests with context for fresh generation or refinement.

Prompt Modes

The mode option controls how the AI formats its output:

import { buildSystemPrompt } from "@prototyperai/stream-ui/catalog"

// Generate mode (default): AI outputs ONLY JSONL patches, no prose
const generatePrompt = buildSystemPrompt(catalog, { mode: "generate" })

// Chat mode: AI responds conversationally, wraps JSONL in ```spec fences
const chatPrompt = buildSystemPrompt(catalog, { mode: "chat" })

Generate mode ("generate", default) tells the model to output only valid JSONL patches — one JSON object per line, no explanations, no markdown, no code fences. Use this when piping output directly into a StreamCompiler.

Chat mode ("chat") tells the model to respond naturally and wrap its JSONL patches inside ```spec code fences. The model can include explanatory text before and after the fence. Use this when building a conversational UI where the user sees both the AI's explanation and the rendered result.
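If you consume chat-mode responses yourself rather than through the library, you need to pull the patch lines back out of the spec fence. A minimal sketch (extractSpecPatches is illustrative, not a library export):

```typescript
// Illustrative helper (not part of the library): extract the JSONL patch
// lines from a chat-mode response that wraps them in a ```spec fence.
function extractSpecPatches(response: string): string[] {
  const match = response.match(/```spec\n([\s\S]*?)```/)
  if (!match) return []
  return match[1]
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
}

const reply = [
  "Here's a simple card:",
  "```spec",
  '{"op":"add","path":"/root","value":"main"}',
  '{"op":"add","path":"/elements/main","value":{"type":"Card","props":{}}}',
  "```",
  "Let me know if you want any changes.",
].join("\n")

console.log(extractSpecPatches(reply).length) // 2 patch lines
```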

Custom Rules

Append domain-specific rules to the system prompt with customRules. Each string becomes a bullet point in the RULES section the AI sees:

const prompt = buildSystemPrompt(catalog, {
  customRules: [
    "Always use the Card component as the root element",
    "Prefer dark variant for all buttons",
    "Include at least 3 sample items in any list",
  ],
})

Custom rules appear after the built-in rules (use only catalog components, always include a root, use flat keys, etc.), so they can refine or constrain the default behavior without overriding it.

Custom System Introduction

Override the default system introduction with the system option. This replaces the opening paragraph that describes the AI's role:

const prompt = buildSystemPrompt(catalog, {
  system:
    "You are a dashboard builder. Generate admin interfaces using JSON Patch format. Focus on data tables and charts.",
})

The default introduction is:

You are a UI generator that outputs JSON. You generate user interfaces by producing a flat UI spec in JSONL (streaming JSON Patch) format.

Everything after the introduction (output format, spec structure, component catalog, rules) is still generated automatically from your catalog.

Custom Prompt Templates

For full control over the system prompt, pass a template function. It receives a PromptContext with the catalog, component names, and the default prompt as a starting point:

import type { PromptContext } from "@prototyperai/stream-ui/catalog"

const prompt = buildSystemPrompt(catalog, {
  template: (ctx: PromptContext) => {
    // Start with the default prompt, then append custom sections
    return `${ctx.defaultPrompt}

## BRAND GUIDELINES
- Use blue (#2563eb) as the primary action color
- All headings must use sentence case
- Maximum 3 levels of nesting
`
  },
})

The PromptContext object exposes:

  • catalog — the component catalog passed to buildSystemPrompt
  • componentNames — the names of all components in the catalog
  • defaultPrompt — the fully generated default system prompt
  • formatZodType — helper that renders a Zod props schema as prompt text
You can also build a prompt from scratch using ctx.catalog and ctx.formatZodType:

const prompt = buildSystemPrompt(catalog, {
  template: (ctx) => {
    const components = ctx.componentNames
      .map((name) => {
        const def = ctx.catalog.components[name]!
        return `- ${name}: ${ctx.formatZodType(def.props)}`
      })
      .join("\n")

    return `You are a form builder. Only use these components:\n${components}\n\nOutput JSONL patches.`
  },
})

User Prompts and Refinement

buildUserPrompt wraps a user's request with context about the current spec and application state. It has two modes depending on whether you pass a currentSpec:

Fresh Generation

When no currentSpec is provided, the prompt reminds the model to stream patches progressively:

import { buildUserPrompt } from "@prototyperai/stream-ui/catalog"

const prompt = buildUserPrompt("Build a pricing page with three tiers")

Refinement Mode

When you pass an existing spec, the prompt includes the full spec as JSON and instructs the model to output only the patches needed to make the requested change — not a full regeneration:

const prompt = buildUserPrompt("Add a free trial toggle to the header", {
  currentSpec: existingSpec,
})

The model receives the current spec and instructions like:

  • To add a new element: {"op":"add","path":"/elements/new-key","value":{...}}
  • To modify an existing element: {"op":"replace","path":"/elements/existing-key","value":{...}}
  • To remove an element: {"op":"remove","path":"/elements/old-key"}

This keeps refinement responses fast and focused.
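The semantics of those three operation shapes can be sketched against a plain object. In a real pipeline the StreamCompiler consumes the patches for you; applyPatch below is a hypothetical helper, and the Switch element is a made-up example:

```typescript
// Hypothetical helper showing the semantics of the three patch shapes above.
// Real pipelines hand patches to the StreamCompiler instead.
type Patch = { op: "add" | "replace" | "remove"; path: string; value?: unknown }

function applyPatch(spec: Record<string, any>, patch: Patch): void {
  // "/elements/toggle" → ["elements", "toggle"]
  const keys = patch.path.split("/").filter(Boolean)
  const last = keys.pop()!
  let target = spec
  for (const key of keys) target = target[key] ??= {}
  if (patch.op === "remove") delete target[last]
  else target[last] = patch.value // "add" and "replace" both assign
}

const spec: Record<string, any> = {
  root: "main",
  elements: { main: { type: "Card", props: {}, children: [] } },
}

applyPatch(spec, {
  op: "add",
  path: "/elements/toggle",
  value: { type: "Switch", props: { label: "Free trial" } }, // example element
})
applyPatch(spec, {
  op: "replace",
  path: "/elements/main",
  value: { type: "Card", props: { variant: "dark" } },
})
```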

State Context

Provide application data so the model can generate data-driven UIs:

const prompt = buildUserPrompt("Show a table of these users", {
  stateContext: {
    users: [
      { id: 1, name: "Alice", role: "Admin" },
      { id: 2, name: "Bob", role: "Editor" },
    ],
    currency: "USD",
  },
})

The state context is included as a JSON block with instructions to reference it via $state expressions.

Prompt Length Limits

Truncate user input to avoid exceeding model context limits:

const prompt = buildUserPrompt(veryLongUserInput, {
  maxPromptLength: 2000, // Truncates the user's text to 2000 characters
})
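Note that maxPromptLength is a character count, not a token count. Conceptually it behaves like a plain cut (a sketch; the library's exact truncation behavior may differ):

```typescript
// Conceptual sketch of maxPromptLength: a plain character-count cut
// (assumed behavior; the library's implementation may differ in detail).
const truncate = (text: string, max: number): string =>
  text.length <= max ? text : text.slice(0, max)

console.log(truncate("x".repeat(5000), 2000).length) // 2000
```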

UserPromptOptions

  • currentSpec — an existing spec; when provided, switches to refinement mode
  • stateContext — application data the model can reference via $state expressions
  • maxPromptLength — maximum character length for the user's text before truncation

Token Budgeting

The system prompt includes your entire component catalog — every component's name, props schema, description, events, and slots. For large catalogs, this can consume significant tokens.

Strategies to reduce token usage:

  1. Keep descriptions concise. Each component's description field appears verbatim in the prompt.
  2. Use example props. When provided, the prompt uses your example instead of auto-generating one from the schema.
  3. Split catalogs. Create focused sub-catalogs for different use cases instead of one catalog with everything:
const formCatalog = defineCatalog({
  components: { Input, Select, Checkbox, RadioGroup, Button },
})

const dashboardCatalog = defineCatalog({
  components: { Card, Heading, Text, Table, Chart, Badge },
})

// Use the right catalog for the task
const prompt = buildSystemPrompt(formCatalog)
  4. Use a template to trim sections. If you know the model already understands certain concepts, use a template to remove sections like the dynamic values reference or the repeat/list docs.

Approximate token sizes (varies by catalog):

  • Base prompt (format, rules, examples): ~800 tokens
  • Per component: ~30–80 tokens depending on schema complexity
  • State/expressions/actions reference: ~400 tokens
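Using the rough figures above, a back-of-envelope estimate of a catalog's prompt size:

```typescript
// Back-of-envelope estimate from the figures above: ~800-token base,
// ~400-token state/expressions/actions reference, 30–80 tokens per component.
function estimatePromptTokens(componentCount: number): { low: number; high: number } {
  const base = 800 + 400
  return {
    low: base + componentCount * 30,
    high: base + componentCount * 80,
  }
}

console.log(estimatePromptTokens(20)) // { low: 1800, high: 2800 }
```

A 20-component catalog lands somewhere around 1,800 to 2,800 tokens, which is why splitting into focused sub-catalogs pays off quickly.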

Structured Outputs vs Streaming

Stream UI supports two approaches for AI-generated UIs:

| | JSONL Streaming | Structured Output (JSON Schema) |
|---|---|---|
| Format | Newline-delimited JSON Patch operations | Single JSON object matching the full spec schema |
| Progressive rendering | Yes — UI fills in as patches arrive | No — UI renders only after the full response |
| Refinement | Patches modify the existing spec incrementally | Full spec must be regenerated |
| Model support | Any model that outputs text | Models with structured output support (OpenAI, Anthropic) |
| Validation | Per-line — malformed lines are skipped | Schema-enforced by the model provider |
| Token usage for refinements | Lower (patches only) | Higher (full spec each time) |
| Setup | buildSystemPrompt + StreamCompiler | catalog.jsonSchema() + provider's structured output API |
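The per-line validation property is worth spelling out: because each patch is its own line, a consumer can skip a malformed line without abandoning the stream. A minimal sketch (parseJsonlLines is illustrative, not a library export):

```typescript
// Illustrative per-line JSONL parsing: malformed lines are skipped
// instead of failing the whole response.
function parseJsonlLines(chunk: string): unknown[] {
  const patches: unknown[] = []
  for (const line of chunk.split("\n")) {
    if (!line.trim()) continue
    try {
      patches.push(JSON.parse(line))
    } catch {
      // malformed line: skip it and keep consuming the stream
    }
  }
  return patches
}

const good = '{"op":"add","path":"/root","value":"main"}'
console.log(parseJsonlLines(`${good}\noops not json\n${good}`).length) // 2
```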

Use JSONL streaming (the default) when you want progressive rendering, efficient refinement, and maximum model compatibility.

Use structured outputs when you need guaranteed schema compliance and don't need progressive rendering. Export the schema with:

const jsonSchema = catalog.jsonSchema()
// Pass to OpenAI's response_format, Anthropic's tool_use, etc.

API Endpoints

The docs site exposes three endpoints for external tooling:

GET /stream-ui/prompt.txt

Returns the full system prompt as plain text. Supports query parameters:

| Parameter | Type | Description |
|---|---|---|
| mode | "generate" \| "chat" | Prompt mode (default: "generate") |
| rules | string | Comma-separated custom rules |

# Default generate mode
curl https://prototyper-ui.com/stream-ui/prompt.txt

# Chat mode with custom rules
curl "https://prototyper-ui.com/stream-ui/prompt.txt?mode=chat&rules=Use%20dark%20theme,Max%203%20elements"

GET /stream-ui/schema.json

Returns the full JSON Schema for the spec format, derived from all catalog component and action Zod schemas:

curl https://prototyper-ui.com/stream-ui/schema.json

Use this with OpenAI's response_format: { type: "json_schema", json_schema: schema } or similar structured output APIs.
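As an example, an OpenAI chat completions request body might look like this. This is a sketch: the model name and the "ui_spec" label are arbitrary choices, and the schema constant stands in for the real payload fetched from /stream-ui/schema.json:

```typescript
// Sketch of an OpenAI structured-output request body. "ui_spec" is an
// arbitrary schema name; the schema constant is a stand-in for the real
// JSON Schema fetched from /stream-ui/schema.json.
const schema = { type: "object", properties: {} } // stand-in for schema.json

const body = {
  model: "gpt-4o",
  messages: [{ role: "user" as const, content: "Build a pricing page with three tiers" }],
  response_format: {
    type: "json_schema" as const,
    json_schema: { name: "ui_spec", schema, strict: true },
  },
}

console.log(JSON.stringify(body).length > 0) // serializable request body
```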

POST /stream-ui/validate

Validates a spec against the catalog. Optionally auto-fixes common issues:

curl -X POST https://prototyper-ui.com/stream-ui/validate \
  -H "Content-Type: application/json" \
  -d '{"spec": {"root": "main", "elements": {"main": {"type": "Card", "props": {}}}}, "autofix": true}'

Request body:

| Field | Type | Description |
|---|---|---|
| spec | object | The spec to validate |
| autofix | boolean | When true and the spec is invalid, return a corrected spec with a list of fixes |

Response:

{
  "valid": true,
  "issues": [],
  "fixed": {
    "spec": { "..." : "..." },
    "fixes": ["Added missing children array to element 'main'"]
  }
}

The fixed field is only present when autofix: true and the spec had issues.

Best Practices

Start with the defaults. The built-in prompt covers the spec format, all dynamic expressions, repeat/list rendering, visibility conditions, events, actions, and your full component catalog. Most use cases need only buildSystemPrompt(catalog).

Use refinement mode for edits. When the user wants to modify an existing UI, always pass currentSpec to buildUserPrompt. This produces smaller, faster responses because the model outputs only the patches needed.

Provide state context for data-driven UIs. When your app has data the UI should display, pass it as stateContext so the model can reference it with $state expressions instead of inventing placeholder data.

Test prompts with the API endpoints. Fetch /stream-ui/prompt.txt to see exactly what the model receives. This is the fastest way to debug generation issues — if the prompt is wrong, the output will be wrong.

Validate after generation. Use catalog.validate(spec) or the /stream-ui/validate endpoint to catch issues in generated specs before rendering. The autofix option can correct common structural problems automatically.

Prefer chat mode for user-facing conversations. If your product shows the AI's response alongside the rendered UI, use mode: "chat" so the model can explain its changes. If the AI's response goes directly to a renderer, use mode: "generate".

Next Steps

  • Component Catalog — Define components and actions for the prompt system
  • Streaming — Connect prompts to the streaming pipeline
  • Validation — Validate generated specs against your catalog
  • API Reference — Full API reference for all exports
