Documentation Index

Fetch the complete documentation index at: https://docs.output.ai/llms.txt

Use this file to discover all available pages before exploring further.

This section covers deploying Output workflows to a production environment. Whether you’re running a single worker or scaling across multiple instances, the deployment architecture stays the same.

What you’re deploying

An Output deployment consists of two core services, with an optional third for remote tracing:
  • API (required): HTTP endpoint for triggering workflows, listing available workflows, and retrieving results. Deployed from a pre-built Docker image, so there is no custom code to maintain.
  • Worker (required): Runs your workflows. Contains your workflow code, connects to the Temporal backend, and executes workflow steps.
  • Redis (optional): Enables remote trace file generation and S3 upload for debugging production runs. See Tracing for details.
The API is a lightweight HTTP server that accepts workflow execution requests, routes them to the Temporal backend, and returns results. It ships as a pre-built Docker image, so there's no custom code to maintain.

The Worker is the core of your deployment: it contains your workflow code and connects to the Temporal backend for orchestration. Workers scale horizontally; you can add more instances to handle increased load without changing your workflow code.
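To make the API's role concrete, here is a minimal Python sketch of how a client might construct a request that triggers a workflow run. The host, route, and payload shape are assumptions for illustration only; check your deployment's actual endpoints. The request is built but deliberately not sent.

```python
"""Sketch of calling the Output API over HTTP.

The endpoint path and payload shape below are assumptions --
consult your deployment's actual routes before using them.
"""
import json
from urllib.request import Request

API_URL = "https://your-api.example.com"  # placeholder host


def build_run_request(workflow: str, payload: dict) -> Request:
    """Build (but do not send) a POST that triggers a workflow run."""
    return Request(
        url=f"{API_URL}/workflows/{workflow}/run",  # hypothetical route
        data=json.dumps({"input": payload}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_run_request("my-workflow", {"query": "hello"})
print(req.full_url)      # https://your-api.example.com/workflows/my-workflow/run
print(req.get_method())  # POST
```

In a real client you would pass the request to `urllib.request.urlopen` (or use an HTTP library of your choice) and read the response body for the result.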
These guides assume you’re using Temporal Cloud for workflow orchestration. If you need help setting up Temporal Cloud, see the Temporal Cloud documentation.

Prerequisites

Before deploying to any platform, ensure you have:
  • A Temporal backend with a dedicated namespace (e.g. Temporal Cloud)
  • Your workflow repository on GitHub
  • API keys for any services your workflows use (Anthropic, OpenAI, etc.)
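Since both the Worker and the API depend on this configuration being present at startup, a fail-fast check can save a debugging round trip. The sketch below is illustrative; the variable names are assumptions and should match whatever your workflows actually read.

```python
"""Fail fast when required deployment variables are missing.

The variable names are illustrative assumptions -- substitute the
ones your own workflows and Temporal connection actually use.
"""
import os

REQUIRED = ["TEMPORAL_ADDRESS", "TEMPORAL_NAMESPACE", "ANTHROPIC_API_KEY"]


def missing_vars(required, env=os.environ):
    """Return the names in `required` that are unset or empty in `env`."""
    return [name for name in required if not env.get(name)]


# Demo with an explicit dict instead of the real environment:
example_env = {"TEMPORAL_ADDRESS": "your-namespace.tmprl.cloud:7233"}
print(missing_vars(REQUIRED, example_env))
# -> ['TEMPORAL_NAMESPACE', 'ANTHROPIC_API_KEY']
```

In a worker's entry point you might call `missing_vars(REQUIRED)` against the real environment and exit with an error listing anything it returns.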

Platform guides

Choose your deployment platform:

Railway

Simple Docker-based deployments with automatic scaling.

Render

Infrastructure-as-code deployments with a single render.yaml Blueprint.

Advanced

Remote tracing with Redis and S3 for production debugging.

Next steps

Tracing

Configure production tracing and S3 storage.

Error Handling

Handle failures gracefully in production.