Remote tracing with Redis
By default, workers don’t generate trace files in production. Local tracing writes to disk, which works in development but isn’t practical when your worker runs in a container that gets replaced on every deploy. Remote tracing solves this: the worker streams trace events to Redis as workflows execute. When a run completes, the trace is assembled and optionally uploaded to S3 as a JSON file. You can then pull traces via the API or CLI to debug failures without SSH access to your worker.

What you need
- A Redis instance accessible from your worker. Most hosting providers offer managed Redis — check your provider’s add-on marketplace or database options.
- An S3 bucket (optional) if you want traces persisted beyond Redis TTL. Any S3-compatible storage works.
Worker environment variables
Add these to your worker service’s environment:

| Variable | Required | Description |
|---|---|---|
| OUTPUT_REDIS_URL | Yes | Redis connection string. The worker uses this to buffer trace events during workflow execution. |
| OUTPUT_TRACE_REMOTE_ON | Yes | Set to true to enable remote tracing. |
| OUTPUT_REDIS_TRACE_TTL | No | TTL in seconds for trace data in Redis. Default: 7 days (604800). |
| OUTPUT_TRACE_REMOTE_S3_BUCKET | No | S3 bucket name. When set, completed traces are uploaded as JSON files. |
| OUTPUT_AWS_REGION | With S3 | AWS region for the S3 bucket. |
| OUTPUT_AWS_ACCESS_KEY_ID | With S3 | AWS access key with write access to the bucket. |
| OUTPUT_AWS_SECRET_ACCESS_KEY | With S3 | AWS secret key. |
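Put together, a worker environment might look like the following. All values here are placeholders for illustration; substitute your own connection string, bucket, and credentials:

```shell
# Example worker environment for remote tracing (all values are placeholders).
export OUTPUT_REDIS_URL="redis://default:password@redis.internal:6379"
export OUTPUT_TRACE_REMOTE_ON="true"
export OUTPUT_REDIS_TRACE_TTL="604800"   # 7 days, which is also the default

# Optional: persist completed traces to S3 beyond the Redis TTL.
export OUTPUT_TRACE_REMOTE_S3_BUCKET="my-trace-bucket"
export OUTPUT_AWS_REGION="us-east-1"
export OUTPUT_AWS_ACCESS_KEY_ID="AKIA-EXAMPLE"
export OUTPUT_AWS_SECRET_ACCESS_KEY="example-secret"
```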
The API service does not need Redis configuration. Only the worker streams trace events.
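Because a misconfigured S3 half (bucket set, credentials missing) fails only at upload time, it can be useful to validate the variables before the worker starts. A minimal POSIX sh sketch, using the variable names from the table above; the function name and messages are illustrative:

```shell
# Sketch of a pre-start guard for remote tracing configuration.
check_trace_config() {
  [ -n "${OUTPUT_REDIS_URL:-}" ] || { echo "missing OUTPUT_REDIS_URL"; return 1; }
  [ "${OUTPUT_TRACE_REMOTE_ON:-}" = "true" ] || { echo "OUTPUT_TRACE_REMOTE_ON must be 'true'"; return 1; }
  # S3 upload is optional, but the AWS settings must travel together with the bucket.
  if [ -n "${OUTPUT_TRACE_REMOTE_S3_BUCKET:-}" ]; then
    for v in OUTPUT_AWS_REGION OUTPUT_AWS_ACCESS_KEY_ID OUTPUT_AWS_SECRET_ACCESS_KEY; do
      eval "val=\${$v:-}"
      [ -n "$val" ] || { echo "missing $v (required when S3 bucket is set)"; return 1; }
    done
  fi
  echo "trace config OK"
}

# Example: minimal valid configuration (placeholder URL).
OUTPUT_REDIS_URL="redis://example-host:6379"
OUTPUT_TRACE_REMOTE_ON="true"
check_trace_config
```

Running this guard in the worker's entrypoint surfaces configuration errors at deploy time instead of at the first workflow run.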
Verifying it works
- Deploy the worker with the new environment variables
- Run a workflow through the API
- Check that traces appear when you pull them via the API or CLI

If no traces appear, first confirm that OUTPUT_REDIS_URL is reachable from the worker.
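One way to check reachability from the worker host, assuming redis-cli is installed there. The scan is only a peek at whatever keys the worker writes; the key names are not documented here, so treat the output as a sign of activity rather than a specific format:

```shell
# Basic reachability: a healthy connection answers PONG.
redis-cli -u "$OUTPUT_REDIS_URL" ping

# Peek at keys the worker has written during a workflow run.
redis-cli -u "$OUTPUT_REDIS_URL" --scan | head -n 20
```

If ping fails, check network rules between the worker and Redis before debugging anything else.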