Coordinates client requests with the dataflow layer.
This crate hosts the “coordinator”, an object which sits at the center of the system and coordinates communication between the various components. Responsibilities of the coordinator include:
- Launching the dataflow workers.
- Periodically allowing the dataflow workers to compact existing data.
- Executing SQL queries from clients by parsing and planning them, sending the plans to the dataflow layer, and then streaming the results back to the client.
- Assigning timestamps to incoming source data.
- Persistent metadata storage for the coordinator.
- Per-connection configuration parameters and state.
- Telemetry utilities.
- Enforces critical section invariants for functions that perform writes to tables, e.g.
- Contains all of the components necessary for running webhook validation.
- A bundle of storage and compute collection identifiers.
- Configures a coordinator.
- Bundle of state related to statement execution.
- State that the coordinator must process as part of retiring command execution.
`ExecuteContextExtra::Default` is guaranteed to produce a value that will cause the coordinator to do nothing, and is intended for use by code that invokes the execution processing flow (i.e., `sequence_plan`) without actually being a statement execution.
- The response to
- Information used when determining the timestamp for a query.
- Errors that can occur in the coordinator.
- Notices that can occur in the adapter layer.
- Errors returned when running validation of a webhook request.
- The state of a cancellation request.
- The response to
- The response from a `Peek`, with row multiplicities represented in unary.
- An enum describing the timeline context of a query.
- The timeline and timestamp context of a read.
- Serves the coordinator based on the provided configuration.