Railway is a great fit for Output deployments — it handles Docker builds automatically and scales horizontally out of the box. Here’s what we’ll do:

Project structure

After running output init, your repository looks like this:
your-workflows/
├── config/
│   └── costs.yml
├── src/
│   └── workflows/
│       └── your_workflow/
│           ├── workflow.ts
│           ├── steps.ts
│           └── ...
├── package.json
└── tsconfig.json
This is the default structure — it doesn’t include any deployment configuration yet. During this guide, we’ll add an ops/ directory containing a Dockerfile for the worker.

Step 1: Set up Railway project

  1. Go to Railway’s dashboard and click New Project
  2. Select Deploy from GitHub repo and connect your repository
  3. Railway will detect your project — don’t deploy yet, we need to configure services first

Step 2: Deploy the Output API

The Output API is published as a Docker image. Deploy it first so you can verify the connection to Temporal before adding the worker.
  1. Click + New → Docker Image
  2. Enter: docker.io/outputai/api:0.1
  3. Configure variables:
# Temporal connection
TEMPORAL_ADDRESS=<region>.aws.api.temporal.io:7233
TEMPORAL_NAMESPACE=<your-namespace>
TEMPORAL_API_KEY=<your-temporal-api-key>

# Task queue the worker will listen on. Must match the OUTPUT_CATALOG_ID used by the worker.
OUTPUT_CATALOG_ID=main

# Generate a secure token for API authentication
OUTPUT_API_AUTH_TOKEN=<generate-a-secure-token>
  4. Go to Settings → Networking and generate a Railway domain (or add your custom domain)
  5. Configure scaling:
| Setting | Recommended Value |
| --- | --- |
| Min Instances | 1 |
| Max Instances | 3 (adjust based on load) |
| Target CPU | 60% |
| Target Memory | 60% |
  6. Deploy the service and verify it’s running:
curl https://your-api.railway.app/health
You should get a 200 OK response.
Use the same OUTPUT_API_AUTH_TOKEN value when calling the API from your application. Railway can auto-generate secure values — click the dice icon next to the variable value.
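If you prefer generating the token locally rather than using the dice icon, any long random string works; for example with openssl:

```shell
# Print a 64-character hex token suitable for OUTPUT_API_AUTH_TOKEN
openssl rand -hex 32
```

Store the value somewhere safe; you will need it again when configuring clients that call the API.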

Step 3: Create the worker Dockerfile

With the API running, we’ll prepare the worker. First, add the ops/ directory and a Dockerfile to your repository. Create ops/Dockerfile:
ops/Dockerfile
FROM node:24.13.0-slim

ENV NODE_ENV=development

WORKDIR /app

# Required for Temporal's Rust-based client
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*

# Install dependencies
COPY package.json package-lock.json ./
RUN npm run output:worker:install

# Build the worker
COPY src/ ./src/
COPY tsconfig.json ./
RUN npm run output:worker:build

# Clean up source files after build
RUN rm -rf ./src tsconfig.json

# Create non-root user for security
RUN groupadd -r appuser && useradd -r -g appuser appuser
RUN chown -R appuser:appuser /app
USER appuser

# Production settings
ENV NODE_ENV=production
ENV PATH="/app/node_modules/.bin:$PATH"

# V8 heap = 80% of container memory (leaves room for Temporal Rust runtime + OS)
ENV NODE_OPTIONS="--max-old-space-size-percentage=80 --heapsnapshot-signal=SIGUSR2"

CMD ["output-worker"]
The NODE_OPTIONS setting allocates 80% of container memory to the V8 heap, leaving room for the Temporal Rust runtime and OS overhead. Adjust the percentage based on your Railway plan and workflow memory requirements.
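To make the split concrete, here is what the 80% allocation leaves on a hypothetical 2 GB instance (the container size is illustrative, not a recommendation):

```shell
# Illustrative only: how an 80% heap allocation splits a 2 GB container
CONTAINER_MB=2048
HEAP_MB=$((CONTAINER_MB * 80 / 100))     # 1638 MB for the V8 heap
OVERHEAD_MB=$((CONTAINER_MB - HEAP_MB))  # 410 MB left for the Rust runtime + OS
echo "${HEAP_MB} ${OVERHEAD_MB}"
```

If your workflows hold large payloads in memory, that remaining overhead can be too tight; lower the percentage rather than raising it.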
Commit the new ops/ directory and push to GitHub before continuing:
git add ops/
git commit -m "Add worker Dockerfile for Railway deployment"
git push

Step 4: Configure and deploy the worker

  1. Click on the service Railway created from your repo (or create a new one pointing to your repo)
  2. Go to Settings and configure:
| Setting | Value |
| --- | --- |
| Root Directory | / (or your monorepo path) |
| Dockerfile Path | ops/Dockerfile |
| Watch Paths | src/** (optional, for filtered deployments) |
  3. Go to Variables and add:
# Temporal connection
TEMPORAL_ADDRESS=<region>.aws.api.temporal.io:7233
TEMPORAL_NAMESPACE=<your-namespace>
TEMPORAL_API_KEY=<your-temporal-api-key>

# Task queue for the worker to listen on. Must match the OUTPUT_CATALOG_ID used in the API.
OUTPUT_CATALOG_ID=main

# LLM API keys
ANTHROPIC_API_KEY=<your-key>
OPENAI_API_KEY=<your-key>

# Add any other API keys your workflows need
Use Railway’s Raw Editor to paste multiple variables at once. For sensitive values, Railway encrypts them automatically.
  4. Configure scaling (in Settings → Deploy):
| Setting | Recommended Value |
| --- | --- |
| Min Instances | 1 |
| Max Instances | 10 (adjust based on load) |
| Target CPU | 60% |
| Target Memory | 60% |
  5. Deploy the worker service

Step 5: Verify

Now that the worker is deployed, verify it has connected to the API and your workflows are registered.
  1. Check that the API can see your workflows via the catalog endpoint:
curl -H "Authorization: Basic <your-api-auth-token>" \
  https://your-api.railway.app/workflow/catalog
You should see a JSON response listing your registered workflows. If the list is empty, the worker may still be starting up — check the worker logs in Railway and try again after a moment.
  2. Run a workflow end-to-end:
curl -X POST https://your-api.railway.app/workflow/run \
  -H "Content-Type: application/json" \
  -H "Authorization: Basic <your-api-auth-token>" \
  -d '{
    "workflowName": "your_workflow",
    "input": { "key": "value" }
  }'

Config as code (optional)

Railway supports configuration in a railway.json file. This is useful for reproducible deployments:
railway.json
{
  "$schema": "https://railway.com/railway.schema.json",
  "build": {
    "builder": "DOCKERFILE",
    "dockerfilePath": "ops/Dockerfile"
  },
  "deploy": {
    "startCommand": "output-worker",
    "restartPolicyType": "ON_FAILURE",
    "restartPolicyMaxRetries": 3
  }
}
Railway’s config-as-code only covers build/deploy settings for a single service, not full project scaffolding. Environment variables must still be set in the dashboard.

Environment-specific configuration

Use Railway’s environments for staging vs production:
railway.json
{
  "$schema": "https://railway.com/railway.schema.json",
  "build": {
    "dockerfilePath": "ops/Dockerfile"
  },
  "deploy": {
    "restartPolicyType": "ON_FAILURE"
  },
  "environments": {
    "production": {
      "deploy": {
        "numReplicas": 3,
        "restartPolicyMaxRetries": 5
      }
    },
    "staging": {
      "deploy": {
        "numReplicas": 1,
        "restartPolicyMaxRetries": 1
      }
    }
  }
}

Tuning worker concurrency

For high-throughput workloads, you can tune how aggressively the worker pulls and executes tasks from Temporal. Add these environment variables to the worker service:
TEMPORAL_MAX_CONCURRENT_ACTIVITY_TASK_EXECUTIONS=150
TEMPORAL_MAX_CONCURRENT_WORKFLOW_TASK_EXECUTIONS=600
TEMPORAL_MAX_CACHED_WORKFLOWS=50
TEMPORAL_MAX_CONCURRENT_ACTIVITY_TASK_POLLS=10
TEMPORAL_MAX_CONCURRENT_WORKFLOW_TASK_POLLS=10
The defaults work for most workloads. Only tune these if you’re seeing task queue backlog in the Temporal UI.
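When you do tune them, a quick back-of-envelope check helps confirm the container can actually sustain the concurrency. For example, on a hypothetical 8 GB worker with the settings above and the Dockerfile’s 80% heap allocation:

```shell
# Rough per-activity heap budget at 150 concurrent activity executions
HEAP_MB=$((8192 * 80 / 100))        # 6553 MB usable V8 heap (80% of 8 GB)
PER_ACTIVITY_MB=$((HEAP_MB / 150))  # ~43 MB per in-flight activity
echo "${PER_ACTIVITY_MB}"
```

If a single activity routinely needs more than that budget, lower TEMPORAL_MAX_CONCURRENT_ACTIVITY_TASK_EXECUTIONS or move to a larger instance before raising poll counts.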

Monitoring

Railway dashboard

Railway provides built-in metrics for:
  • CPU and memory usage
  • Network traffic
  • Deployment history and logs

Temporal Cloud

Monitor workflow executions in the Temporal Cloud UI:
  • Active workflows
  • Failed executions
  • Workflow history

Traces

If you’ve enabled remote tracing, traces are stored in Redis and optionally uploaded to S3.

Scaling considerations

| Scenario | Recommendation |
| --- | --- |
| Low traffic | 1 worker, 1 API instance |
| Moderate traffic | 2-5 workers, 2 API instances |
| High traffic | 5-30 workers, 3+ API instances |
| Burst workloads | Configure aggressive auto-scaling thresholds (40-50% CPU) |
Workers scale independently from the API. If workflows are queuing up in Temporal, add more workers. If API response times are slow, add more API instances.

Troubleshooting

Worker not picking up jobs

  1. Check the Temporal UI for pending workflows
  2. Verify TEMPORAL_ADDRESS, TEMPORAL_NAMESPACE, and TEMPORAL_API_KEY are correct
  3. Check worker logs in Railway for connection errors

Trace files not appearing

  1. Verify OUTPUT_REDIS_URL is set and Redis is running
  2. Check OUTPUT_TRACE_REMOTE_ON=true is set
  3. For S3, verify AWS credentials have write access to the bucket
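Grouped for reference, the variables these checks look for on the worker service (the Redis URL is a placeholder; the credential names follow the standard AWS SDK environment variables):

```
# Remote tracing
OUTPUT_TRACE_REMOTE_ON=true
OUTPUT_REDIS_URL=redis://<host>:<port>

# Standard AWS SDK credentials, if traces are uploaded to S3
AWS_ACCESS_KEY_ID=<your-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret>
```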

Out of memory errors

  1. Increase NODE_OPTIONS memory allocation
  2. Upgrade Railway plan for more resources
  3. Check for memory leaks in workflow code (large payloads, unbounded arrays)
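As a starting point for item 1: if you prefer an absolute ceiling to the Dockerfile’s percentage-based setting, the long-standing fixed-size flag works too. The value below assumes a plan with roughly 8 GB; adjust it for yours, and keep the --heapsnapshot-signal flag if you rely on it:

```
NODE_OPTIONS=--max-old-space-size=6144
```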

Build failures

  1. Check Dockerfile path is correct
  2. Verify all dependencies are in package.json
  3. Review build logs for specific errors