This section covers deploying Output workflows to a production environment. Whether you’re running a single worker or scaling across multiple instances, the deployment architecture stays the same.
## What you’re deploying
An Output deployment consists of two core services, with an optional third for remote tracing:

| Service | Required | Purpose |
|---|---|---|
| API | Yes | HTTP endpoint for triggering workflows, listing available workflows, and retrieving results. Deployed from a pre-built Docker image — no custom code needed. |
| Worker | Yes | Runs your workflows. Contains your workflow code, connects to the Temporal backend, and executes workflow steps. |
| Redis | No | Enables remote trace file generation and S3 upload for debugging production runs. See Tracing for details. |
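Under typical assumptions, the topology above can be sketched as a docker-compose file. The image names, port, and environment variable names below are placeholders for illustration, not published values; substitute the ones from your own setup.

```yaml
# Sketch only: image names and variable names are placeholders.
services:
  api:
    image: output/api:latest          # placeholder for the pre-built API image
    ports:
      - "8080:8080"
    environment:
      TEMPORAL_ADDRESS: ${TEMPORAL_ADDRESS}
      TEMPORAL_NAMESPACE: ${TEMPORAL_NAMESPACE}
  worker:
    build: .                          # built from your workflow repository
    environment:
      TEMPORAL_ADDRESS: ${TEMPORAL_ADDRESS}
      TEMPORAL_NAMESPACE: ${TEMPORAL_NAMESPACE}
      ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY}
  redis:                              # optional: only for remote tracing
    image: redis:7
```

The platform guides below map these same services onto Railway and Render; the split is identical, only the deployment mechanics differ.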
These guides assume you’re using Temporal Cloud for workflow orchestration. If you need help setting up Temporal Cloud, see the Temporal Cloud documentation.
## Prerequisites
Before deploying to any platform, ensure you have:

- A Temporal backend with a dedicated namespace (e.g. Temporal Cloud)
- Your workflow repository on GitHub
- API keys for any services your workflows use (Anthropic, OpenAI, etc.)
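Missing API keys tend to surface only when a workflow step first calls the provider, so it can help to fail fast at deploy time. Here is a minimal pre-deploy check; the variable names are illustrative examples, not a required convention:

```shell
# Sketch: fail fast when required configuration is missing.
# Variable names below are examples; match them to your platform's env settings.
check_env() {
  _missing=0
  for _var in "$@"; do
    if [ -z "$(printenv "$_var")" ]; then
      echo "missing required environment variable: $_var" >&2
      _missing=1
    fi
  done
  return $_missing
}

# Example: run before starting the worker.
# check_env TEMPORAL_ADDRESS TEMPORAL_NAMESPACE ANTHROPIC_API_KEY
```

Running this in your container entrypoint (or CI) turns a vague mid-workflow failure into an immediate, named error.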
## Platform guides

Choose your deployment platform:

- **Railway**: Simple Docker-based deployments with automatic scaling.
- **Render**: Infrastructure-as-code deployments with a single `render.yaml` Blueprint.
- **Advanced**: Remote tracing with Redis and S3 for production debugging.
## Next steps

- **Tracing**: Configure production tracing and S3 storage.
- **Error Handling**: Handle failures gracefully in production.