
workflow

Durable workflow engine + Lua client. The engine runs in assay serve; any assay Lua app becomes a worker via require("assay.workflow"). Workflow code runs deterministically and replays from a persisted event log, so worker crashes don't lose work and side effects don't duplicate.

Three pieces, one binary: the engine (assay serve), the CLI (assay workflow / assay schedule), and the Lua client (require("assay.workflow")).

The engine and clients communicate over HTTP — any language with an HTTP client can implement a worker, not just Lua.

Engine — assay serve

Start the workflow server.

Authentication modes (mutually exclusive — pick one):

SQLite is single-instance only — the engine takes an engine_lock row and refuses to start if another instance holds it. For multi-instance deployment (Kubernetes, Docker Swarm), use Postgres; the cron scheduler picks a leader via pg_advisory_lock so only one engine fires.

The engine serves:

ASSAY_WF_DISPATCH_TIMEOUT_SECS env var (default 30) controls how long a worker can be silent before its dispatch lease is forcibly released — see "crash safety" below.
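For example, to give slow workers a longer lease before the engine reclaims their dispatches (the value shown is illustrative):

```
ASSAY_WF_DISPATCH_TIMEOUT_SECS=120 assay serve
```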

CLI — assay workflow / assay schedule

Talk to a running engine. Reads ASSAY_ENGINE_URL (default http://localhost:8080).
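If the engine runs somewhere other than localhost, point the CLI at it first (hostname and workflow ID are illustrative):

```
export ASSAY_ENGINE_URL=http://assay.example.com:8080
assay workflow describe deploy-1234
```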

Lua client — require("assay.workflow")

Register the assay process as a worker that runs both workflow handlers (orchestration) and activity handlers (concrete work) for a queue.

Client-side inspection / control (no listen required):

Workflow handler context (ctx)

Inside workflow.define(name, function(ctx, input) ... end):

Inside workflow.activity(name, function(ctx, input) ... end):

Crash safety

Workflow code is deterministic by replay. Each ctx: call gets a per-execution sequence number, and the engine persists ActivityScheduled/Completed/Failed, TimerScheduled/Fired, SignalReceived, SideEffectRecorded, ChildWorkflowStarted/Completed/Failed, WorkflowAwaitingSignal, and WorkflowCancelRequested events. When a worker is asked to run a workflow task it receives the full event history; ctx: calls short-circuit to cached values for everything already in history, so the workflow always reaches the same state, and the only thing that re-runs is the next unfulfilled step.
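The replay short-circuit can be sketched in plain Lua. This is a conceptual model, not the engine's actual code: each ctx: call consumes the next sequence number; if history already holds a result for that number it is returned immediately, otherwise the step is the next one to schedule.

```lua
-- Conceptual sketch of replay, not the real engine.
-- `history` maps sequence numbers to recorded event results.
local function make_ctx(history)
  local seq = 0
  return {
    execute_activity = function(self, name, input)
      seq = seq + 1
      local recorded = history[seq]
      if recorded then
        -- Already in history: short-circuit, never re-run the activity.
        return recorded.result
      end
      -- Not in history: this is the next unfulfilled step. The real
      -- engine would persist ActivityScheduled here, dispatch the
      -- activity to a worker, and suspend the workflow task.
      error("suspend: schedule activity " .. name .. " at seq " .. seq)
    end,
  }
end

-- Replaying with two completed steps: both calls return cached values;
-- a third ctx:execute_activity call would be the first to schedule work.
local ctx = make_ctx({
  [1] = { result = { image = "app:abc123" } },
  [2] = { result = { by = "alice" } },
})
```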

Specific crash modes:

ctx:side_effect is the escape hatch for any operation that would produce different values across replays (current time, random IDs, external HTTP). The result is recorded once on first execution and returned from cache thereafter, even after a worker crash.
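A sketch of the pattern, assuming ctx:side_effect takes a function whose return value is recorded (the workflow name and timestamp field are illustrative; workflow.define and ctx:execute_activity are the documented calls):

```lua
workflow.define("TimestampedDeploy", function(ctx, input)
  -- Recorded once as SideEffectRecorded; replays return the cached
  -- value, so the workflow sees the same timestamp even after a crash.
  local started_at = ctx:side_effect(function()
    return os.time()
  end)
  return ctx:execute_activity("deploy", {
    ref = input.git_sha,
    started_at = started_at,
  })
end)
```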

Example — sequential activities + signal

local workflow = require("assay.workflow")
workflow.connect("http://assay.example.com", { token = env.get("ASSAY_TOKEN") })

workflow.define("ApproveAndDeploy", function(ctx, input)
  local artifact = ctx:execute_activity("build", { ref = input.git_sha })
  -- pause until a human signals "approve" via the API or dashboard
  local approval = ctx:wait_for_signal("approve")
  return ctx:execute_activity("deploy", {
    image = artifact.image,
    env = input.target_env,
    approver = approval.by,
  })
end)

workflow.activity("build", function(ctx, input)
  local resp = http.post("https://ci/build", { ref = input.ref })
  if resp.status ~= 200 then error("build failed: " .. resp.status) end
  return { image = json.parse(resp.body).image }
end)

workflow.activity("deploy", function(ctx, input)
  local resp = http.post("https://k8s/apply", input)
  if resp.status ~= 200 then error("deploy failed: " .. resp.status) end
  return { url = json.parse(resp.body).url, approver = input.approver }
end)

workflow.listen({ queue = "deploys" })  -- blocks

Start a run, signal approval, see the result:

assay workflow start --type ApproveAndDeploy --id deploy-1234 \
  --input '{"git_sha":"abc123","target_env":"staging"}'

assay workflow signal deploy-1234 approve '{"by":"alice"}'

assay workflow describe deploy-1234   # status: COMPLETED, result: {url, approver}

Concepts

Dashboard

Served at /workflow/ (/ redirects there). Real-time monitoring, dark and light themes, brand-aligned with assay.rs. Views: workflows (list with status filter, drill-in to event timeline + children), schedules, workers, queues, namespaces, settings. Live updates via SSE. Asset URLs are cache-busted with a per-process startup stamp, so a deploy is reflected immediately.

Notes