
Render is a cloud platform that supports infrastructure-as-code via a render.yaml Blueprint. You define your services in a single file, connect your GitHub repo, and Render handles the rest. In this guide, you'll add a worker Dockerfile and a render.yaml Blueprint to your repository, store your API keys in encrypted credentials, deploy both services, and verify the deployment end-to-end.

Project structure

After running output init, your repository looks like this:
your-workflows/
├── config/
│   ├── costs.yml
│   └── credentials.yml.enc    # encrypted credentials (committed)
├── src/
│   └── workflows/
│       └── your_workflow/
│           ├── workflow.ts
│           ├── steps.ts
│           └── ...
├── package.json
└── tsconfig.json
This is the default structure — it doesn’t include any deployment configuration yet. During this guide, we’ll add an ops/ directory containing a Dockerfile for the worker, and a render.yaml Blueprint at the root.
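For orientation, this is the layout the guide works toward; the two additions are marked, and everything else is unchanged:

```
your-workflows/
├── config/
│   └── ...
├── ops/
│   └── render.Dockerfile    # added in Step 1
├── src/
│   └── ...
├── package.json
├── render.yaml              # added in Step 2
└── tsconfig.json
```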

Step 1: Create the worker Dockerfile

Create the ops/ directory and add ops/render.Dockerfile for your worker:
ops/render.Dockerfile
FROM node:24.15.0-slim

# Development mode during the build so build tooling (devDependencies) is available;
# switched to production further down
ENV NODE_ENV=development

WORKDIR /app

# Required for Temporal's Rust-based client
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*

# Install dependencies (copy first to leverage Docker cache)
COPY package.json package-lock.json ./
RUN npm run output:worker:install

# Build the worker
COPY src/ ./src/
COPY config/ ./config/
COPY tsconfig.json ./
RUN npm run output:worker:build

# Clean up source files after build (config stays — credentials are needed at runtime)
RUN rm -rf ./src tsconfig.json

# Create non-root user for security
RUN groupadd -r appuser && useradd -r -g appuser appuser
RUN chown -R appuser:appuser /app
USER appuser

# Production settings
ENV NODE_ENV=production
ENV PATH="/app/node_modules/.bin:$PATH"

# V8 heap = 80% of container memory (leaves room for Temporal Rust runtime + OS)
ENV NODE_OPTIONS="--max-old-space-size-percentage=80 --heapsnapshot-signal=SIGUSR2"

CMD ["output-worker"]
The NODE_OPTIONS setting allocates 80% of container memory to the V8 heap, leaving room for the Temporal Rust runtime and OS overhead. Adjust the percentage based on your Render plan and workflow memory requirements.
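To make that concrete, here is a small sketch of what an 80% heap works out to for a couple of illustrative container sizes (not official Render plan specs). The same arithmetic gives an absolute --max-old-space-size value if your Node version lacks the percentage flag:

```typescript
// What --max-old-space-size-percentage=80 works out to in MB for a
// given container size. Instance sizes below are illustrative only.
const heapMb = (containerMb: number, percent = 80): number =>
  Math.floor((containerMb * percent) / 100);

console.log(heapMb(2048)); // 2 GB container -> 1638 MB heap
console.log(heapMb(4096)); // 4 GB container -> 3276 MB heap
```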
Don’t commit yet — we’ll add the render.yaml Blueprint next and commit everything together.

Step 2: Create the Render Blueprint

Render’s Blueprint Spec lets you define your entire infrastructure in a single render.yaml file. When you connect your repo, Render reads this file and provisions all services automatically. Create render.yaml at the root of your repository:
render.yaml
previews:
  generation: off
services:
  # API Service - deploys from pre-built Docker image
  - type: web
    name: your-project-output-api
    runtime: image
    image:
      url: docker.io/outputai/api:0.1
    plan: pro
    region: oregon
    envVars:
      - key: TEMPORAL_ADDRESS
        value: <region>.aws.api.temporal.io:7233
      - key: TEMPORAL_NAMESPACE
        value: <your-namespace>
      - key: TEMPORAL_API_KEY
        sync: false
      # Task queue the worker will listen on. Must match the OUTPUT_CATALOG_ID used by the worker.
      - key: OUTPUT_CATALOG_ID
        value: main
      - key: OUTPUT_API_AUTH_TOKEN
        generateValue: true
    scaling:
      minInstances: 1
      maxInstances: 3
      targetCPUPercent: 60
      targetMemoryPercent: 60

  # Worker Service - builds from your GitHub repo
  - type: worker
    name: your-project-output-worker
    runtime: docker
    repo: https://github.com/your-org/your-workflows
    branch: main
    plan: pro
    region: oregon
    dockerfilePath: ./ops/render.Dockerfile
    envVars:
      # Must match the OUTPUT_CATALOG_ID used in the API
      - key: OUTPUT_CATALOG_ID
        value: main
      - key: TEMPORAL_ADDRESS
        value: <region>.aws.api.temporal.io:7233
      - key: TEMPORAL_NAMESPACE
        value: <your-namespace>
      - key: TEMPORAL_API_KEY
        sync: false
      # Master key for decrypting config/credentials/production.yml.enc
      # The only real secret needed — all other API keys live in encrypted credentials
      - key: OUTPUT_CREDENTIALS_KEY_PRODUCTION
        sync: false
      # LLM API keys — resolved from encrypted credentials at worker boot
      - key: ANTHROPIC_API_KEY
        value: credential:anthropic.api_key
      - key: OPENAI_API_KEY
        value: credential:openai.api_key
    scaling:
      minInstances: 1
      maxInstances: 10
      targetCPUPercent: 60
      targetMemoryPercent: 60

Adding Redis for remote tracing

For production debugging, you can add Redis as a Render service and reference it using fromService. This keeps everything in one Blueprint, and Render handles the connection string automatically:
render.yaml
services:
  # ... API and Worker services from above ...

  # Redis for remote tracing
  - type: redis
    name: your-project-output-cache
    plan: pro
    region: oregon
    ipAllowList: []
    maxmemoryPolicy: allkeys-lru
Then add these env vars to your worker service to enable remote tracing:
    envVars:
      # ... other env vars ...
      - key: OUTPUT_REDIS_URL
        fromService:
          type: redis
          name: your-project-output-cache
          property: connectionString
      - key: OUTPUT_TRACE_REMOTE_ON
        value: true
      # Optional: persist traces to S3
      - key: OUTPUT_TRACE_REMOTE_S3_BUCKET
        value: your-traces-bucket
      - key: OUTPUT_AWS_REGION
        value: us-west-1
      - key: OUTPUT_AWS_ACCESS_KEY_ID
        sync: false
      - key: OUTPUT_AWS_SECRET_ACCESS_KEY
        sync: false
The fromService reference automatically injects the Redis connection string — no need to manage it manually. See Advanced configuration for more on remote tracing.

Blueprint key concepts

Service types:
  • type: web — The API. Render assigns it a public URL and routes HTTP traffic to it.
  • type: worker — The worker. No public URL, just runs your Dockerfile.
Secret handling with sync: false: Variables marked sync: false are secrets. Render prompts you to enter their values when you first deploy the Blueprint. They won’t be committed to your repo.
- key: TEMPORAL_API_KEY
  sync: false               # Render prompts for this during setup
Auto-generated values: Use generateValue: true for tokens that just need to be a random secure string. Render generates and stores the value for you.
- key: OUTPUT_API_AUTH_TOKEN
  generateValue: true       # Render creates a random secure value
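If you ever need a comparable token outside Render, for example to run the API locally with a fixed OUTPUT_API_AUTH_TOKEN, a random hex string from Node's crypto module does the job. This is a sketch; the format of Render's generated values may differ:

```typescript
// Generate a random secure token, similar in spirit to Render's
// generateValue. Render's actual generated format may differ.
import { randomBytes } from "node:crypto";

const token = randomBytes(32).toString("hex"); // 64 hex characters
console.log(token);
```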

API keys and credentials in production

Your workflows need API keys (Anthropic, OpenAI, etc.) to run. Instead of pasting each key into Render as a separate secret, Output lets you store all your keys in one encrypted file that lives in your repo. Render only needs one secret — the master key that unlocks the file. Here’s how it works:

1. Put all your API keys into encrypted credentials

If you haven’t already, initialize credentials and add your keys:
npx output credentials init --environment production
npx output credentials edit --environment production
This opens an editor. Add all the API keys your workflows need:
anthropic:
  api_key: sk-ant-...
openai:
  api_key: sk-proj-...
# Add any other keys your workflows use
geekbot:
  api_key: api_...
beehiiv:
  api_key: hNga...
Save and close. This creates two files:
  • config/credentials/production.yml.enc — the encrypted file. Commit this to your repo. It’s encrypted and safe to push.
  • config/credentials/production.key — the master key. Never commit this. It’s already in .gitignore.

2. Add the master key to Render

Copy the contents of config/credentials/production.key and add it to your worker service in Render as a secret env var:
OUTPUT_CREDENTIALS_KEY_PRODUCTION = <paste key here>
This is the only secret you need to manage in Render. Everything else comes from the encrypted file.

3. Reference keys in your Blueprint with credential:

In render.yaml, instead of setting each API key as a separate secret, pass a credential: reference as a plain string. The SDK resolves these automatically when the worker starts:
envVars:
  # These are NOT the real keys — they're references that the SDK
  # resolves at boot from the encrypted credentials file.
  - key: ANTHROPIC_API_KEY
    value: credential:anthropic.api_key
  - key: OPENAI_API_KEY
    value: credential:openai.api_key
When the worker starts, the SDK reads these env vars, sees they start with credential:, decrypts the credentials file using the master key, and replaces the values with the real API keys — before any workflow code runs.
The credential: pattern works for any env var the AI SDK or your code reads from process.env. Just add the key to your encrypted credentials and reference it in render.yaml.
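The boot-time substitution can be pictured with a short sketch. This is illustrative only; resolveCredentialRefs is a hypothetical name, not the Output SDK's real implementation, and the decryption step is elided:

```typescript
// Illustrative sketch of the substitution described above: env vars whose
// value starts with "credential:" are replaced from the (already decrypted)
// credentials object; everything else passes through untouched.
type Credentials = Record<string, Record<string, string>>;

function resolveCredentialRefs(
  env: Record<string, string>,
  creds: Credentials,
): Record<string, string> {
  const resolved: Record<string, string> = {};
  for (const [key, value] of Object.entries(env)) {
    if (value.startsWith("credential:")) {
      const [section, field] = value.slice("credential:".length).split(".");
      // Fall back to the raw reference if the key path isn't in the file
      resolved[key] = creds[section]?.[field] ?? value;
    } else {
      resolved[key] = value;
    }
  }
  return resolved;
}

const env = { ANTHROPIC_API_KEY: "credential:anthropic.api_key", OUTPUT_CATALOG_ID: "main" };
const creds = { anthropic: { api_key: "sk-ant-example" } };
console.log(resolveCredentialRefs(env, creds).ANTHROPIC_API_KEY); // "sk-ant-example"
```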

Rotating a key

To change an API key in production:
npx output credentials edit --environment production
# Change the key value, save, close
git add config/credentials/production.yml.enc
git commit -m "rotate anthropic key"
git push
Render redeploys automatically. No env var changes are needed; the master key stays the same.

With credentials set up, commit everything from the previous steps and push to GitHub before continuing:
git add ops/ render.yaml config/
git commit -m "Add worker Dockerfile and Render Blueprint"
git push

Step 3: Deploy

  1. Go to Render’s dashboard and click New > Blueprint
  2. Connect your GitHub repository
  3. Render detects render.yaml and shows the services it will create
  4. Enter values for any sync: false secrets when prompted
  5. Click Apply to provision all services
Render assigns a .onrender.com subdomain to web services automatically. You can add a custom domain in the service settings.

Step 4: Verify

Once both services are running, verify the API is up and the worker has registered your workflows.
  1. Check the API is healthy:
curl https://your-project-output-api.onrender.com/health
You should get a 200 OK response.
  2. Confirm the worker has connected and your workflows are registered via the catalog endpoint:
curl -H "Authorization: Basic <your-api-auth-token>" \
  https://your-project-output-api.onrender.com/workflow/catalog
You should see a JSON response listing your registered workflows. If the list is empty, the worker may still be starting up — check the worker logs in Render and try again after a moment.
  3. Run a workflow end-to-end:
curl -X POST https://your-project-output-api.onrender.com/workflow/run \
  -H "Content-Type: application/json" \
  -H "Authorization: Basic <your-api-auth-token>" \
  -d '{
    "workflowName": "your_workflow",
    "input": { "key": "value" }
  }'
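If the catalog check in step 2 keeps coming back empty while the worker boots, a small retry loop beats re-running curl by hand. A hedged sketch: it assumes a response shape like { workflows: string[] }, which may not match the real API contract, and fetchCatalog is injected so the HTTP client and auth header are up to you:

```typescript
// Poll until the catalog lists at least one workflow, since the worker
// may still be registering at startup. The { workflows: string[] }
// response shape is an assumption, not the documented API contract.
async function waitForWorkflows(
  fetchCatalog: () => Promise<{ workflows: string[] }>,
  retries = 5,
  delayMs = 2000,
): Promise<string[]> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    const { workflows } = await fetchCatalog();
    if (workflows.length > 0) return workflows;
    if (attempt < retries) await new Promise((r) => setTimeout(r, delayMs));
  }
  throw new Error(`no workflows registered after ${retries} attempts`);
}
```

Wire fetchCatalog to a fetch of /workflow/catalog with your Authorization header, mirroring the curl call above.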

Tuning worker concurrency

For high-throughput workloads, you can tune how aggressively the worker pulls and executes tasks from Temporal. Add these to your worker’s envVars:
- key: TEMPORAL_MAX_CONCURRENT_ACTIVITY_TASK_EXECUTIONS
  value: 150
- key: TEMPORAL_MAX_CONCURRENT_WORKFLOW_TASK_EXECUTIONS
  value: 600
- key: TEMPORAL_MAX_CACHED_WORKFLOWS
  value: 50
- key: TEMPORAL_MAX_CONCURRENT_ACTIVITY_TASK_POLLS
  value: 10
- key: TEMPORAL_MAX_CONCURRENT_WORKFLOW_TASK_POLLS
  value: 10
The defaults work for most workloads. Only tune these if you’re seeing task queue backlog in the Temporal UI.

Monitoring

Render dashboard

Render provides built-in metrics for:
  • CPU and memory usage per service
  • Deploy history and logs

Temporal Cloud

Monitor workflow executions in the Temporal Cloud UI:
  • Active workflows
  • Failed executions
  • Workflow history

Traces

If you’ve enabled remote tracing, traces are stored in Redis and optionally uploaded to S3.

Scaling considerations

Scenario            Recommendation
Low traffic         1 worker, 1 API instance
Moderate traffic    2-5 workers, 2 API instances
High traffic        5-30 workers, 3+ API instances
Burst workloads     Configure aggressive auto-scaling thresholds (40-50% CPU/memory)
Workers scale independently from the API. If workflows are queuing up in Temporal, add more workers. If API response times are slow, add more API instances.

Troubleshooting

Worker not picking up jobs

  1. Check the Temporal UI for pending workflows
  2. Verify TEMPORAL_ADDRESS, TEMPORAL_NAMESPACE, and TEMPORAL_API_KEY are correct
  3. Check worker logs in Render for connection errors

Trace files not appearing

  1. Verify OUTPUT_REDIS_URL is set and Redis is running
  2. Check OUTPUT_TRACE_REMOTE_ON=true is set
  3. For S3, verify AWS credentials have write access to the bucket

Out of memory errors

  1. Increase NODE_OPTIONS memory allocation in the Dockerfile
  2. Upgrade Render plan for more resources
  3. Check for memory leaks in workflow code (large payloads, unbounded arrays)

Credentials not found (MissingCredentialError)

  1. Verify config/credentials/production.yml.enc exists in your repo (not just config/credentials.yml.enc — the Dockerfile sets NODE_ENV=production, which makes the SDK look for the scoped path)
  2. Verify OUTPUT_CREDENTIALS_KEY_PRODUCTION is set in Render with the correct master key
  3. Verify the Dockerfile includes COPY config/ ./config/ — without it, the encrypted file never reaches the container
  4. If using credential: references for API keys (e.g. ANTHROPIC_API_KEY=credential:anthropic.api_key), make sure the encrypted credentials file actually contains that key path

Build failures

  1. Check Dockerfile path matches dockerfilePath in render.yaml
  2. Verify all dependencies are in package.json
  3. Review build logs in the Render dashboard