How Delivery Works

Push extracted, resolved, and reviewed data to any downstream system. Delivery is a typed, at-least-once pipeline with idempotency keys on the wire, append-only history, and a dead-letter queue for terminal failures.
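Because delivery is at-least-once, a downstream consumer may see the same event more than once and should deduplicate on the idempotency key. A minimal sketch of that consumer-side dedup, assuming the key arrives as an `Idempotency-Key` header (the header name and payload shape here are illustrative, not the actual wire contract):

```python
import json

# In-memory dedup store; a real consumer would persist processed keys.
seen_keys: set[str] = set()

def handle_delivery(headers: dict, body: bytes) -> str:
    """Process a delivery exactly once, keyed on its idempotency key."""
    key = headers.get("Idempotency-Key")  # assumed header name
    if key in seen_keys:
        return "duplicate-ignored"  # redelivery of an already-processed event
    seen_keys.add(key)
    event = json.loads(body)
    # ... apply the event to downstream state ...
    return f"processed {event['type']}"

# First delivery is processed; a redelivery with the same key is a no-op.
handle_delivery({"Idempotency-Key": "k1"}, b'{"type": "document.extracted"}')
handle_delivery({"Idempotency-Key": "k1"}, b'{"type": "document.extracted"}')
```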

Every delivery flows through a five-stage pipeline:

The delivery pipeline

| Stage | Type | Description |
| --- | --- | --- |
| 1. Signal | `event` | A producer emits a typed event (e.g. `document.extracted`, `result.approved`) into the outbox. Producers are stateless: they only publish. |
| 2. Binding | `match` | A poller drains the outbox and matches each event against active bindings. A binding joins a signal filter to a deliverable + destination + serializer. |
| 3. Resolver | `load` | The deliverable resolver loads the payload (document metadata, a record snapshot, an extraction run, ...) at delivery time, using only entity IDs from the signal. |
| 4. Serializer | `encode` | The serializer encodes the payload into the wire format (`json`, `ndjson`, `csv`, `csv_file`, `xlsx`, `rows`, `graph`, `raw`, `md`, or `txt`) after an optional `field_map` projection. |
| 5. Connector | `transport` | The connector ships the encoded bytes through the `TransportWrapper` (SSRF guard, payload cap, rate limit, retry ladder). The Slice-1 connector is `webhook`. |
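The five stages above can be sketched end to end. All class and field names in this sketch are illustrative assumptions, not the real schema; it only shows the flow from outbox draining through transport:

```python
import json
from dataclasses import dataclass
from typing import Callable

@dataclass
class Signal:
    """Stage 1: a typed event sitting in the outbox."""
    type: str
    entity_id: str

@dataclass
class Binding:
    """Stage 2: joins a signal filter to resolver + serializer + connector."""
    signal_filter: str
    resolver: Callable[[str], dict]        # stage 3: load payload by entity ID
    serializer: Callable[[dict], bytes]    # stage 4: encode to wire format
    connector: Callable[[bytes], None]     # stage 5: ship the bytes

def drain_outbox(outbox: list[Signal], bindings: list[Binding]) -> int:
    """Match each outbox event against bindings and deliver; return count."""
    delivered = 0
    for signal in outbox:
        for binding in bindings:
            if binding.signal_filter != signal.type:
                continue
            payload = binding.resolver(signal.entity_id)  # resolve at delivery time
            wire = binding.serializer(payload)            # encode
            binding.connector(wire)                       # transport
            delivered += 1
    return delivered

# Wire it up with a json serializer and a webhook-like connector stub.
sent: list[bytes] = []
binding = Binding(
    signal_filter="document.extracted",
    resolver=lambda eid: {"id": eid, "pages": 3},
    serializer=lambda p: json.dumps(p).encode(),
    connector=sent.append,
)
n = drain_outbox([Signal("document.extracted", "doc_42")], [binding])
```

Note that the resolver runs at delivery time with only the entity ID from the signal, so the payload reflects the entity's current state rather than a copy captured when the event was emitted.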

Every attempt is logged in `delivery_items`. Terminal failures (retry exhausted or a permanent 4xx) write a `delivery_dead_letter` row, each of which can be replayed. The outbox, history, DLQ, and catalog are all accessible via the `/v1/delivery/*` API.
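Replaying a dead letter amounts to re-enqueuing it as a fresh outbox event. A minimal sketch of that operation, with assumed field names (the real rows live in `delivery_items` and `delivery_dead_letter` and are exposed via the `/v1/delivery/*` API, whose exact endpoints are not shown here):

```python
from dataclasses import dataclass

@dataclass
class DeadLetter:
    """Assumed shape of a terminal-failure row, for illustration only."""
    event_type: str
    entity_id: str
    reason: str            # e.g. "retry-exhausted" or "permanent-4xx"
    replayed: bool = False

def replay(dead_letter: DeadLetter, outbox: list) -> bool:
    """Re-enqueue a dead letter as a fresh outbox event, at most once."""
    if dead_letter.replayed:
        return False       # guard against double-replay
    outbox.append((dead_letter.event_type, dead_letter.entity_id))
    dead_letter.replayed = True
    return True

outbox: list = []
dl = DeadLetter("result.approved", "res_7", "retry-exhausted")
ok = replay(dl, outbox)
```

Because history is append-only, the replay produces a new attempt in `delivery_items` rather than mutating the failed one.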