- Set up Railway project — Create the project and connect your GitHub repo
- Deploy the Output API — Get the API running and verify it connects to Temporal
- Create the worker Dockerfile — Add the ops/ directory and Dockerfile to your repo
- Deploy the worker — Configure and deploy the worker service
- Verify — Confirm everything is connected end-to-end
Project structure
After running `output init`, your repository includes an `ops/` directory containing a Dockerfile for the worker.
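A sketch of what the layout might look like (assuming a typical Node.js project; only the `ops/` directory and its Dockerfile are confirmed by this guide, while `src/` and `package.json` are referenced later in the deployment and troubleshooting steps):

```
.
├── ops/
│   └── Dockerfile    # worker image
├── src/              # workflow code (assumed layout)
├── package.json
└── ...
```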
Step 1: Set up Railway project
- Go to Railway’s dashboard and click New Project
- Select Deploy from GitHub repo and connect your repository
- Railway will detect your project — don’t deploy yet, we need to configure services first
Step 2: Deploy the Output API
The Output API is published as a Docker image. Deploy it first so you can verify the connection to Temporal before adding the worker.
- Click + New → Docker Image
- Enter: docker.io/outputai/api:0.1
- Configure the service’s environment variables
- Go to Settings → Networking and generate a Railway domain (or add your custom domain)
- Configure scaling:
| Setting | Recommended Value |
|---|---|
| Min Instances | 1 |
| Max Instances | 3 (adjust based on load) |
| Target CPU | 60% |
| Target Memory | 60% |
- Deploy the service and verify it’s running; a request to the API should return a 200 OK response.
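A minimal sketch of this step. The variable names below appear in this guide’s troubleshooting section; the `/health` path and the `your-api.up.railway.app` domain are placeholders, so substitute your API’s actual health endpoint and the Railway domain you generated:

```shell
# Variables the API service typically needs (values are placeholders):
#   TEMPORAL_ADDRESS    Temporal Cloud gRPC endpoint
#   TEMPORAL_NAMESPACE  your Temporal namespace
#   TEMPORAL_API_KEY    Temporal Cloud API key

# Verify the deployed API responds (health path is an assumption):
curl -i https://your-api.up.railway.app/health
```

If the request returns 200 OK, the API is up; check the service logs in Railway to confirm it connected to Temporal.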
Step 3: Create the worker Dockerfile
With the API running, we’ll prepare the worker. First, add the `ops/` directory and a Dockerfile to your repository.
Create `ops/Dockerfile`:
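The original Dockerfile content isn’t shown here, so the following is a hedged sketch of a typical Node.js worker image; the base image, build commands, and the `dist/worker.js` entrypoint are all assumptions to adjust to your project:

```dockerfile
# Sketch of a Node.js worker image; adapt to your project's build.
FROM node:20-slim
WORKDIR /app

# Install dependencies first to take advantage of Docker layer caching
COPY package*.json ./
RUN npm ci

# Copy source and build
COPY . .
RUN npm run build

# Entrypoint is an assumption; use your worker's actual start command
CMD ["node", "dist/worker.js"]
```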
Commit the `ops/` directory and push to GitHub before continuing:
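For example (the commit message is arbitrary):

```shell
git add ops/
git commit -m "Add worker Dockerfile"
git push
```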
Step 4: Configure and deploy the worker
- Click on the service Railway created from your repo (or create a new one pointing to your repo)
- Go to Settings and configure:
| Setting | Value |
|---|---|
| Root Directory | / (or your monorepo path) |
| Dockerfile Path | ops/Dockerfile |
| Watch Paths | src/** (optional, for filtered deployments) |
- Go to Variables and add the worker’s environment variables
- Configure Scaling (in Settings → Deploy):
| Setting | Recommended Value |
|---|---|
| Min Instances | 1 |
| Max Instances | 10 (adjust based on load) |
| Target CPU | 60% |
| Target Memory | 60% |
- Deploy the worker service
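An example of the worker’s variables, using the names that appear in this guide’s troubleshooting section. All values are placeholders, the Temporal Cloud address format may differ for your namespace, and the `NODE_OPTIONS` memory value is an assumption:

```
TEMPORAL_ADDRESS=<namespace>.<account>.tmprl.cloud:7233
TEMPORAL_NAMESPACE=<namespace>
TEMPORAL_API_KEY=<temporal-api-key>
OUTPUT_REDIS_URL=redis://<host>:6379
OUTPUT_TRACE_REMOTE_ON=true
NODE_OPTIONS=--max-old-space-size=1536
```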
Step 5: Verify
Now that the worker is deployed, verify it has connected to the API and your workflows are registered.
- Check that the API can see your workflows via the catalog endpoint:
- Run a workflow end-to-end:
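A sketch of both checks. The endpoint paths (`/catalog`, `/workflows/run`), the domain, and the payload are hypothetical; substitute your API’s actual routes:

```shell
# 1. List registered workflows (endpoint path is an assumption):
curl -s https://your-api.up.railway.app/catalog

# 2. Trigger a workflow end-to-end (hypothetical endpoint and payload):
curl -s -X POST https://your-api.up.railway.app/workflows/run \
  -H "Content-Type: application/json" \
  -d '{"workflow": "hello-world"}'
```

Confirm the execution appears in the Temporal Cloud UI.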
Config as code (optional)
Railway supports configuration in a `railway.json` file. This is useful for reproducible deployments:
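A sketch of a `railway.json` for the worker service; the field names follow Railway’s config-as-code schema, but verify them against the current Railway documentation:

```json
{
  "$schema": "https://railway.app/railway.schema.json",
  "build": {
    "builder": "DOCKERFILE",
    "dockerfilePath": "ops/Dockerfile"
  },
  "deploy": {
    "numReplicas": 1,
    "restartPolicyType": "ON_FAILURE",
    "restartPolicyMaxRetries": 3
  }
}
```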
Railway’s config-as-code only covers build/deploy settings for a single service, not full project scaffolding. Environment variables must still be set in the dashboard.
Environment-specific configuration
Use Railway’s environments for staging vs production:
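Railway’s config-as-code supports per-environment overrides. A hedged sketch, with keys per Railway’s schema and illustrative replica counts:

```json
{
  "$schema": "https://railway.app/railway.schema.json",
  "build": { "builder": "DOCKERFILE", "dockerfilePath": "ops/Dockerfile" },
  "deploy": { "numReplicas": 1 },
  "environments": {
    "production": {
      "deploy": { "numReplicas": 3 }
    }
  }
}
```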
Tuning worker concurrency
For high-throughput workloads, you can tune how aggressively the worker pulls and executes tasks from Temporal by setting environment variables on the worker service.
Monitoring
Railway dashboard
Railway provides built-in metrics for:
- CPU and memory usage
- Network traffic
- Deployment history and logs
Temporal Cloud
Monitor workflow executions in the Temporal Cloud UI:
- Active workflows
- Failed executions
- Workflow history
Traces
If you’ve enabled remote tracing, traces are stored in Redis and optionally uploaded to S3.
Scaling considerations
| Scenario | Recommendation |
|---|---|
| Low traffic | 1 worker, 1 API instance |
| Moderate traffic | 2-5 workers, 2 API instances |
| High traffic | 5-30 workers, 3+ API instances |
| Burst workloads | Configure aggressive auto-scaling thresholds (40-50% CPU) |
Troubleshooting
Worker not picking up jobs
- Check the Temporal UI for pending workflows
- Verify `TEMPORAL_ADDRESS`, `TEMPORAL_NAMESPACE`, and `TEMPORAL_API_KEY` are correct
- Check worker logs in Railway for connection errors
Trace files not appearing
- Verify `OUTPUT_REDIS_URL` is set and Redis is running
- Check that `OUTPUT_TRACE_REMOTE_ON=true` is set
- For S3, verify AWS credentials have write access to the bucket
Out of memory errors
- Increase the `NODE_OPTIONS` memory allocation
- Upgrade your Railway plan for more resources
- Check for memory leaks in workflow code (large payloads, unbounded arrays)
Build failures
- Check that the Dockerfile path is correct
- Verify all dependencies are in `package.json`
- Review build logs for specific errors