How Delivery Works
Push extracted, resolved, and reviewed data to any downstream system. Delivery is a typed, at-least-once pipeline with idempotency keys on the wire, append-only history, and a dead-letter queue for terminal failures.
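At-least-once delivery means a destination can receive the same event more than once, so consumers should deduplicate on the idempotency key. A minimal consumer-side sketch, assuming the key arrives in a header named `X-Idempotency-Key` (the header name and payload shape are illustrative assumptions, not the documented wire format):

```python
# Consumer-side deduplication for an at-least-once webhook.
# "seen" is in-memory for the sketch; production would use a
# persistent store with a TTL at least as long as the retry window.
seen: set[str] = set()

def handle_webhook(headers: dict, body: bytes) -> str:
    key = headers.get("X-Idempotency-Key")
    if key is None:
        return "reject"      # refuse events that carry no idempotency key
    if key in seen:
        return "duplicate"   # redelivery: acknowledge, but do not reprocess
    seen.add(key)
    process(body)            # application logic runs once per key
    return "processed"

def process(body: bytes) -> None:
    pass  # placeholder for real handling
```

Acknowledging duplicates (rather than erroring) is what lets the producer's retry ladder run safely.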
Every delivery flows through a five-stage pipeline:
| Stage | Role | Description |
|---|---|---|
| 1. Signal | event | A producer emits a typed event (e.g. `document.extracted`, `result.approved`) into the outbox. Producers are stateless: they only publish. |
| 2. Binding | match | A poller drains the outbox and matches each event against active bindings. A binding joins a signal filter to a deliverable + destination + serializer. |
| 3. Resolver | load | The deliverable resolver loads the payload (document metadata, a record snapshot, an extraction run, ...) at delivery time, using only entity IDs from the signal. |
| 4. Serializer | encode | The serializer encodes the payload into the wire format (`json`, `ndjson`, `csv`, `csv_file`, `xlsx`, `rows`, `graph`, `raw`, `md`, or `txt`) after an optional `field_map` projection. |
| 5. Connector | transport | The connector ships the encoded bytes through the `TransportWrapper` (SSRF guard, payload cap, rate limit, retry ladder). The slice-1 connector is `webhook`. |
Every attempt is logged in `delivery_items`. Terminal failures (retry exhausted or a permanent 4xx) write a `delivery_dead_letter` row, which is replayable. The outbox, history, DLQ, and catalog are all accessible via the `/v1/delivery/*` API.
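The terminal-failure rule above (retry exhausted or a permanent 4xx) can be sketched as a small classifier that decides the fate of each attempt; the attempt cap and the status-code groupings are illustrative assumptions, not the product's documented retry ladder:

```python
MAX_ATTEMPTS = 5  # illustrative retry-ladder depth

def classify(status: int, attempt: int) -> str:
    """Decide the fate of one delivery attempt.

    Returns "ok" (log success to delivery_items),
    "retry" (transient failure: re-enqueue with backoff),
    or "dead_letter" (terminal: write a delivery_dead_letter row).
    """
    if 200 <= status < 300:
        return "ok"
    if status == 429 or status >= 500:   # transient: rate limit / server error
        return "retry" if attempt < MAX_ATTEMPTS else "dead_letter"
    return "dead_letter"                 # permanent 4xx: retrying cannot help
```

Because dead-letter rows are replayable, a `dead_letter` outcome is a parked delivery rather than a lost one: fixing the destination and replaying through the `/v1/delivery/*` API completes it.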