The @output.ai/llm package provides a unified API for LLM generation with built-in prompt templating.

Installation

npm install @output.ai/llm

Quick Start

import { generateText } from '@output.ai/llm';

const result = await generateText({
  prompt: 'summarize@v1',
  variables: { text: 'Your content here' }
});

Generate Functions

generateText

Generate unstructured text:
import { generateText } from '@output.ai/llm';

const response = await generateText({
  prompt: 'explain_topic@v1',
  variables: { topic: 'machine learning' }
});

generateObject

Generate a structured object matching a Zod schema:
import { generateObject } from '@output.ai/llm';
import { z } from '@output.ai/core';

const recipeSchema = z.object({
  title: z.string(),
  ingredients: z.array(z.string()),
  steps: z.array(z.string())
});

const recipe = await generateObject({
  prompt: 'recipe@v1',
  variables: { dish: 'lasagna' },
  schema: recipeSchema
});
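
Because the result matches the schema, its fields can be used directly. A brief sketch (assuming generateObject resolves to the parsed object itself rather than a wrapper):

// recipe is typed by recipeSchema, so these accesses are type-safe
console.log(recipe.title);
for (const ingredient of recipe.ingredients) {
  console.log(`- ${ingredient}`);
}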

generateArray

Generate an array of structured items:
import { generateArray } from '@output.ai/llm';
import { z } from '@output.ai/core';

const taskSchema = z.object({
  title: z.string(),
  priority: z.number()
});

const tasks = await generateArray({
  prompt: 'task_list@v1',
  variables: { project: 'website' },
  schema: taskSchema
});
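
The resolved value is an array of schema-typed items, so it can be filtered and mapped like any array (a sketch, assuming generateArray resolves to the array itself):

// Pull out the titles of high-priority tasks
const urgent = tasks
  .filter((task) => task.priority >= 8)
  .map((task) => task.title);
console.log(urgent);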

generateEnum

Generate a value from allowed options:
import { generateEnum } from '@output.ai/llm';

const category = await generateEnum({
  prompt: 'categorize@v1',
  variables: { text: 'Product announcement' },
  enum: ['marketing', 'engineering', 'sales', 'support']
});
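
Since the result is constrained to the allowed options, it can be routed with a plain switch (a sketch; the routing logic is illustrative):

switch (category) {
  case 'marketing':
  case 'sales':
    // hand off to the commercial queue
    break;
  case 'engineering':
  case 'support':
    // hand off to the technical queue
    break;
}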

Prompt Files

Prompt files use YAML frontmatter for configuration and LiquidJS for templating.

File Format

prompt@v1.prompt
---
provider: anthropic
model: claude-sonnet-4-20250514
temperature: 0.7
---

<system>
You are a helpful assistant.
</system>

<user>
{{ user_message }}
</user>
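
A sketch of invoking the file above (assuming prompt files are resolved by their base name, as in the earlier examples, and that each {{ variable }} in the template is supplied through variables):

import { generateText } from '@output.ai/llm';

const reply = await generateText({
  prompt: 'prompt@v1',
  variables: { user_message: 'What is YAML frontmatter?' }
});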

Configuration Options

Option            Type     Description
provider          string   anthropic, openai, or azure
model             string   Model identifier
temperature       number   Sampling temperature (0.0-1.0)
maxTokens         number   Maximum output tokens
providerOptions   object   Provider-specific options
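
For example, a prompt pinned to a low temperature with a capped output length:

---
provider: openai
model: gpt-4o
temperature: 0.2
maxTokens: 512
---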

Variables

Use LiquidJS syntax to interpolate variables:
---
provider: anthropic
model: claude-sonnet-4-20250514
---

<system>
You are an expert in {{ domain }}.
</system>

<user>
Explain {{ topic }} in {{ style }} terms.
</user>
await generateText({
  prompt: 'explain@v1',
  variables: {
    domain: 'physics',
    topic: 'quantum entanglement',
    style: 'simple'
  }
});
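
LiquidJS also supports control-flow tags such as {% for %}, which work inside prompt templates as in any Liquid document. A sketch of a prompt that loops over a list variable (assuming array values are passed through to the template unchanged):

---
provider: anthropic
model: claude-sonnet-4-20250514
---

<user>
Compare the following approaches:
{% for approach in approaches %}
- {{ approach }}
{% endfor %}
</user>
await generateText({
  prompt: 'compare@v1',
  variables: { approaches: ['REST', 'GraphQL', 'gRPC'] }
});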

Extended Thinking

Enable extended thinking for complex reasoning:
---
provider: anthropic
model: claude-sonnet-4-20250514
providerOptions:
  thinking:
    type: enabled
    budgetTokens: 10000
---
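
The thinking configuration lives entirely in the prompt file, so the call site is unchanged (the prompt name deep_analysis@v1 here is illustrative):

const analysis = await generateText({
  prompt: 'deep_analysis@v1',
  variables: { problem: 'Plan a zero-downtime database migration' }
});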

Providers

Anthropic

---
provider: anthropic
model: claude-sonnet-4-20250514
---
Requires ANTHROPIC_API_KEY environment variable.

OpenAI

---
provider: openai
model: gpt-4o
---
Requires OPENAI_API_KEY environment variable.

Azure OpenAI

---
provider: azure
model: gpt-4o
---
Requires Azure-specific environment variables (typically the resource endpoint and API key from your Azure OpenAI deployment).