Render is a cloud platform that supports infrastructure-as-code via a render.yaml Blueprint. You define your services in a single file, connect your GitHub repo, and Render handles the rest.
Here’s what we’ll do:
- Create a Dockerfile for the worker
- Define the API and worker services in a render.yaml Blueprint
- Deploy both services from the Render dashboard
- Verify the deployment end-to-end
Project structure
After running output init, your repository looks like this:
your-workflows/
├── config/
│   └── costs.yml
├── src/
│   └── workflows/
│       └── your_workflow/
│           ├── workflow.ts
│           ├── steps.ts
│           └── ...
├── package.json
└── tsconfig.json
This is the default structure — it doesn’t include any deployment configuration yet. During this guide, we’ll add an ops/ directory containing a Dockerfile for the worker, and a render.yaml Blueprint at the root.
Step 1: Create the worker Dockerfile
Create the ops/ directory and add ops/render.Dockerfile for your worker:
FROM node:24.13.0-slim
ENV NODE_ENV=development
WORKDIR /app
# Required for Temporal's Rust-based client
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
# Install dependencies (copy first to leverage Docker cache)
COPY package.json package-lock.json ./
RUN npm run output:worker:install
# Build the worker
COPY src/ ./src/
COPY tsconfig.json ./
RUN npm run output:worker:build
# Clean up source files after build
RUN rm -rf ./src tsconfig.json
# Create non-root user for security
RUN groupadd -r appuser && useradd -r -g appuser appuser
RUN chown -R appuser:appuser /app
USER appuser
# Production settings
ENV NODE_ENV=production
ENV PATH="/app/node_modules/.bin:$PATH"
# V8 heap = 80% of container memory (leaves room for Temporal Rust runtime + OS)
ENV NODE_OPTIONS="--max-old-space-size-percentage=80 --heapsnapshot-signal=SIGUSR2"
CMD ["output-worker"]
The NODE_OPTIONS setting allocates 80% of container memory to the V8 heap, leaving room for the Temporal Rust runtime and OS overhead. Adjust the percentage based on your Render plan and workflow memory requirements.
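As a quick sanity check before picking a percentage, you can compute the split directly. This sketch mirrors the 80% setting above; the 2 GB container size is a hypothetical example, not a Render plan recommendation:

```typescript
// Sketch: heap cap vs. headroom for a given container size, mirroring
// --max-old-space-size-percentage=80. Numbers are illustrative.
function heapBudget(containerMb: number, percent = 80): { heapMb: number; headroomMb: number } {
  const heapMb = Math.floor((containerMb * percent) / 100);
  return { heapMb, headroomMb: containerMb - heapMb };
}

// On a hypothetical 2 GB (2048 MB) container:
console.log(heapBudget(2048)); // { heapMb: 1638, headroomMb: 410 }
```

If your workflows hold large payloads in memory, lower the percentage to leave the Temporal Rust runtime more headroom.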
Don’t commit yet — we’ll add the render.yaml Blueprint next and commit everything together.
Step 2: Create the Render Blueprint
Render’s Blueprint Spec lets you define your entire infrastructure in a single render.yaml file. When you connect your repo, Render reads this file and provisions all services automatically.
Create render.yaml at the root of your repository:
previews:
  generation: off
services:
  # API Service - deploys from pre-built Docker image
  - type: web
    name: your-project-output-api
    runtime: image
    image:
      url: docker.io/outputai/api:0.1
    plan: pro
    region: oregon
    envVars:
      - key: TEMPORAL_ADDRESS
        value: <region>.aws.api.temporal.io:7233
      - key: TEMPORAL_NAMESPACE
        value: <your-namespace>
      - key: TEMPORAL_API_KEY
        sync: false
      # Task queue the worker will listen on. Must match the OUTPUT_CATALOG_ID used by the worker.
      - key: OUTPUT_CATALOG_ID
        value: main
      - key: OUTPUT_API_AUTH_TOKEN
        generateValue: true
    scaling:
      minInstances: 1
      maxInstances: 3
      targetCPUPercent: 60
      targetMemoryPercent: 60
  # Worker Service - builds from your GitHub repo
  - type: worker
    name: your-project-output-worker
    runtime: docker
    repo: https://github.com/your-org/your-workflows
    branch: main
    plan: pro
    region: oregon
    dockerfilePath: ./ops/render.Dockerfile
    envVars:
      # Must match the OUTPUT_CATALOG_ID used in the API
      - key: OUTPUT_CATALOG_ID
        value: main
      - key: TEMPORAL_ADDRESS
        value: <region>.aws.api.temporal.io:7233
      - key: TEMPORAL_NAMESPACE
        value: <your-namespace>
      - key: TEMPORAL_API_KEY
        sync: false
      # LLM API keys
      - key: ANTHROPIC_API_KEY
        sync: false
      - key: OPENAI_API_KEY
        sync: false
    scaling:
      minInstances: 1
      maxInstances: 10
      targetCPUPercent: 60
      targetMemoryPercent: 60
Adding Redis for remote tracing
For production debugging, you can add Redis as a Render service and reference it using fromService. This keeps everything in one Blueprint and Render handles the connection string automatically:
services:
  # ... API and Worker services from above ...

  # Redis for remote tracing
  - type: redis
    name: your-project-output-cache
    plan: pro
    region: oregon
    ipAllowList: []
    maxmemoryPolicy: allkeys-lru
Then add these env vars to your worker service to enable remote tracing:
envVars:
  # ... other env vars ...
  - key: OUTPUT_REDIS_URL
    fromService:
      type: redis
      name: your-project-output-cache
      property: connectionString
  - key: OUTPUT_TRACE_REMOTE_ON
    value: true
  # Optional: persist traces to S3
  - key: OUTPUT_TRACE_REMOTE_S3_BUCKET
    value: your-traces-bucket
  - key: OUTPUT_AWS_REGION
    value: us-west-1
  - key: OUTPUT_AWS_ACCESS_KEY_ID
    sync: false
  - key: OUTPUT_AWS_SECRET_ACCESS_KEY
    sync: false
The fromService reference automatically injects the Redis connection string — no need to manage it manually. See Advanced configuration for more on remote tracing.
Blueprint key concepts
Service types:
type: web — The API. Render assigns it a public URL and routes HTTP traffic to it.
type: worker — The worker. No public URL, just runs your Dockerfile.
Secret handling with sync: false:
Variables marked sync: false are secrets. Render prompts you to enter their values when you first deploy the Blueprint. They won’t be committed to your repo.
- key: TEMPORAL_API_KEY
  sync: false # Render prompts for this during setup
Auto-generated values:
Use generateValue: true for tokens that just need to be a random secure string. Render generates and stores the value for you.
- key: OUTPUT_API_AUTH_TOKEN
  generateValue: true # Render creates a random secure value
Commit the new files and push to GitHub before continuing:
git add ops/ render.yaml
git commit -m "Add worker Dockerfile and Render Blueprint"
git push
Step 3: Deploy
- Go to Render’s dashboard and click New → Blueprint
- Connect your GitHub repository
- Render detects render.yaml and shows the services it will create
- Enter values for any sync: false secrets when prompted
- Click Apply to provision all services
Render assigns a .onrender.com subdomain to web services automatically. You can add a custom domain in the service settings.
Step 4: Verify
Once both services are running, verify the API is up and the worker has registered your workflows.
- Check the API is healthy:
curl https://your-project-output-api.onrender.com/health
You should get a 200 OK response.
- Confirm the worker has connected and your workflows are registered via the catalog endpoint:
curl -H "Authorization: Basic <your-api-auth-token>" \
  https://your-project-output-api.onrender.com/workflow/catalog
You should see a JSON response listing your registered workflows. If the list is empty, the worker may still be starting up — check the worker logs in Render and try again after a moment.
- Run a workflow end-to-end:
curl -X POST https://your-project-output-api.onrender.com/workflow/run \
  -H "Content-Type: application/json" \
  -H "Authorization: Basic <your-api-auth-token>" \
  -d '{
    "workflowName": "your_workflow",
    "input": { "key": "value" }
  }'
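If you'd rather call the API from code, the same request can be issued from Node 18+ using the global fetch API. The endpoint path, auth scheme, and payload shape are taken from the curl call above; everything else here is an illustrative sketch:

```typescript
// Sketch: invoke the run endpoint from Node 18+ (global fetch). The base URL,
// Basic auth scheme, and payload shape mirror the curl example above.
function buildRunPayload(workflowName: string, input: Record<string, unknown>): string {
  return JSON.stringify({ workflowName, input });
}

async function runWorkflow(
  baseUrl: string,
  token: string,
  workflowName: string,
  input: Record<string, unknown>,
): Promise<unknown> {
  const res = await fetch(`${baseUrl}/workflow/run`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Basic ${token}`,
    },
    body: buildRunPayload(workflowName, input),
  });
  if (!res.ok) throw new Error(`workflow run failed: ${res.status}`);
  return res.json();
}
```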
Tuning worker concurrency
For high-throughput workloads, you can tune how aggressively the worker pulls and executes tasks from Temporal. Add these to your worker’s envVars:
- key: TEMPORAL_MAX_CONCURRENT_ACTIVITY_TASK_EXECUTIONS
  value: 150
- key: TEMPORAL_MAX_CONCURRENT_WORKFLOW_TASK_EXECUTIONS
  value: 600
- key: TEMPORAL_MAX_CACHED_WORKFLOWS
  value: 50
- key: TEMPORAL_MAX_CONCURRENT_ACTIVITY_TASK_POLLS
  value: 10
- key: TEMPORAL_MAX_CONCURRENT_WORKFLOW_TASK_POLLS
  value: 10
The defaults work for most workloads. Only tune these if you’re seeing task queue backlog in the Temporal UI.
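For reference, here is how env vars like these typically surface in a Temporal TypeScript worker. The option names follow @temporalio/worker's WorkerOptions; the fallback numbers are illustrative, and the output-worker's own mapping may differ:

```typescript
// Sketch: read the tuning env vars with illustrative fallbacks. Option names
// follow @temporalio/worker's WorkerOptions; the output-worker's actual
// mapping may differ.
const intFromEnv = (name: string, fallback: number): number => {
  const raw = process.env[name];
  return raw !== undefined ? Number.parseInt(raw, 10) : fallback;
};

const tuning = {
  maxConcurrentActivityTaskExecutions: intFromEnv("TEMPORAL_MAX_CONCURRENT_ACTIVITY_TASK_EXECUTIONS", 100),
  maxConcurrentWorkflowTaskExecutions: intFromEnv("TEMPORAL_MAX_CONCURRENT_WORKFLOW_TASK_EXECUTIONS", 40),
  maxCachedWorkflows: intFromEnv("TEMPORAL_MAX_CACHED_WORKFLOWS", 50),
};
```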
Monitoring
Render dashboard
Render provides built-in metrics for:
- CPU and memory usage per service
- Deploy history and logs
Temporal Cloud
Monitor workflow executions in the Temporal Cloud UI:
- Active workflows
- Failed executions
- Workflow history
Traces
If you’ve enabled remote tracing, traces are stored in Redis and optionally uploaded to S3.
Scaling considerations
| Scenario | Recommendation |
|---|---|
| Low traffic | 1 worker, 1 API instance |
| Moderate traffic | 2-5 workers, 2 API instances |
| High traffic | 5-30 workers, 3+ API instances |
| Burst workloads | Configure aggressive auto-scaling thresholds (40-50% CPU/memory) |
Workers scale independently from the API. If workflows are queuing up in Temporal, add more workers. If API response times are slow, add more API instances.
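For the burst case, those thresholds map directly onto the Blueprint's scaling block; the values below are illustrative, not a sizing recommendation:

```yaml
# Illustrative burst-friendly autoscaling for the worker service.
# Lower targets make Render add instances earlier in a spike.
scaling:
  minInstances: 2
  maxInstances: 30
  targetCPUPercent: 45
  targetMemoryPercent: 45
```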
Troubleshooting
Worker not picking up jobs
- Check the Temporal UI for pending workflows
- Verify TEMPORAL_ADDRESS, TEMPORAL_NAMESPACE, and TEMPORAL_API_KEY are correct
- Check worker logs in Render for connection errors
Trace files not appearing
- Verify OUTPUT_REDIS_URL is set and Redis is running
- Check that OUTPUT_TRACE_REMOTE_ON=true is set
- For S3, verify AWS credentials have write access to the bucket
Out of memory errors
- Increase the NODE_OPTIONS memory allocation in the Dockerfile
- Upgrade Render plan for more resources
- Check for memory leaks in workflow code (large payloads, unbounded arrays)
Build failures
- Check that the Dockerfile path matches dockerfilePath in render.yaml
- Verify all dependencies are in package.json
- Review build logs in the Render dashboard