Function mz_compute::sink::kafka::encode_stream
fn encode_stream<G>(
input_stream: &Stream<G, ((Option<Row>, Option<Row>), Timestamp, Diff)>,
as_of: SinkAsOf,
shared_gate_ts: Rc<Cell<Option<Timestamp>>>,
encoder: impl Encode + 'static,
fuel: usize,
name_prefix: String
) -> Stream<G, ((Option<Vec<u8>>, Option<Vec<u8>>), Timestamp, Diff)> where
G: Scope<Timestamp = Timestamp>,
Encodes a stream of (Option<Row>, Option<Row>) updates using the specified encoder.
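The shape of this transformation can be illustrated with a simplified, hypothetical sketch. The trait and the string-based "row" below are stand-ins for the real Encode trait and Row type, not the actual mz interfaces: each side of the key/value pair is encoded independently, and a None on either side passes through unchanged (e.g. a None value ultimately becomes a Kafka tombstone).

```rust
/// Hypothetical stand-in for the real Encode trait, which operates on Rows.
trait Encode {
    fn encode(&self, row: &str) -> Vec<u8>;
}

struct Utf8Encoder;

impl Encode for Utf8Encoder {
    fn encode(&self, row: &str) -> Vec<u8> {
        row.as_bytes().to_vec()
    }
}

/// Encode a (key, value) pair, preserving None on either side.
fn encode_pair<E: Encode>(
    enc: &E,
    (key, val): (Option<&str>, Option<&str>),
) -> (Option<Vec<u8>>, Option<Vec<u8>>) {
    (key.map(|k| enc.encode(k)), val.map(|v| enc.encode(v)))
}

fn main() {
    let enc = Utf8Encoder;
    // A present key with an absent value: the None is preserved.
    let (k, v) = encode_pair(&enc, (Some("key"), None));
    assert_eq!(k, Some(b"key".to_vec()));
    assert_eq!(v, None);
}
```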
This operator encodes at most fuel updates per invocation. If necessary, it stashes the remaining updates and uses a timely::scheduling::Activator to re-schedule future invocations.
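The fueled approach can be sketched in plain Rust without the timely dependency; the struct and method names below are hypothetical. Each invocation drains at most `fuel` stashed updates and reports whether work remains, in which case the real operator would use an Activator to schedule itself again.

```rust
use std::collections::VecDeque;

/// Hypothetical stand-in for the operator state: updates stashed for
/// encoding, drained at most `fuel` at a time.
struct FueledEncoder<T> {
    stash: VecDeque<T>,
    fuel: usize,
}

impl<T> FueledEncoder<T> {
    fn new(fuel: usize) -> Self {
        Self { stash: VecDeque::new(), fuel }
    }

    /// Accept a new input update without encoding it yet.
    fn give(&mut self, update: T) {
        self.stash.push_back(update);
    }

    /// Encode at most `fuel` updates. Returns true if updates remain
    /// stashed, i.e. the operator should be re-activated.
    fn run(&mut self, encode: impl Fn(T) -> Vec<u8>, out: &mut Vec<Vec<u8>>) -> bool {
        for _ in 0..self.fuel {
            match self.stash.pop_front() {
                Some(u) => out.push(encode(u)),
                None => return false,
            }
        }
        !self.stash.is_empty()
    }
}

fn main() {
    let mut enc = FueledEncoder::new(2);
    for i in 0..5u32 {
        enc.give(i);
    }
    let mut out = Vec::new();
    // First invocation encodes only `fuel` (2) updates and asks to be
    // re-scheduled because 3 are still stashed.
    assert!(enc.run(|u| u.to_le_bytes().to_vec(), &mut out));
    assert_eq!(out.len(), 2);
    // Keep re-running until drained, as an Activator-driven loop would.
    while enc.run(|u| u.to_le_bytes().to_vec(), &mut out) {}
    assert_eq!(out.len(), 5);
}
```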
Input Row updates must be compatible with the given implementor of Encode.
Updates that are not beyond the given SinkAsOf and/or the gate_ts are discarded without being encoded.
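A minimal sketch of this filtering, under stated assumptions: timestamps are plain u64s, the as-of is a single frontier timestamp plus a strictness flag, and updates at or before the gate timestamp are the ones discarded. The field names and exact comparison semantics here are assumptions for illustration, not the real SinkAsOf API.

```rust
/// Simplified, hypothetical stand-in for SinkAsOf.
struct AsOf {
    frontier: u64,
    strict: bool,
}

/// Keep an update only if it is beyond the as-of (strictly so when
/// `strict` is set) and beyond the optional gate timestamp.
fn keep(ts: u64, as_of: &AsOf, gate_ts: Option<u64>) -> bool {
    let beyond_as_of = if as_of.strict {
        ts > as_of.frontier
    } else {
        ts >= as_of.frontier
    };
    // Assumed semantics: updates at or before the gate are discarded.
    let beyond_gate = gate_ts.map_or(true, |g| ts > g);
    beyond_as_of && beyond_gate
}

fn main() {
    let as_of = AsOf { frontier: 3, strict: false };
    assert!(keep(5, &as_of, None));
    // At the frontier, a strict as-of discards the update.
    assert!(!keep(3, &AsOf { frontier: 3, strict: true }, None));
    // The gate timestamp can discard updates the as-of would keep.
    assert!(!keep(5, &as_of, Some(5)));
}
```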
Input updates do not have to be partitioned or sorted. This operator will not exchange data. Updates with lower timestamps will be processed before updates with higher timestamps if they arrive in that order, but this is not a guarantee: the operator does not wait for the frontier to signal completeness. This ordering is an optimization for downstream operators that behave suboptimally when they receive updates that are too far in the future with respect to the current frontier. The relative order of updates that arrive at the same timestamp is preserved.