pub type Stream<S, D> = StreamCore<S, Vec<D>>;
A stream batching data in vectors.
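For orientation, a minimal sketch of building and observing such a stream, assuming the single-worker timely::example harness and the ToStream and Inspect operators:

use timely::dataflow::Stream;
use timely::dataflow::operators::{ToStream, Inspect};

fn main() {
    timely::example(|scope| {
        // `to_stream` introduces the range as a `Stream<_, i32>`,
        // i.e. batches of `i32` records carried in `Vec`s.
        let stream: Stream<_, i32> = (0..10).to_stream(scope);
        stream.inspect(|x| println!("seen: {}", x));
    });
}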
Aliased Type
struct Stream<S, D> { /* private fields */ }
Trait Implementations
impl<S: Scope, K: ExchangeData + Hash + Eq, V: ExchangeData> Aggregate<S, K, V> for Stream<S, (K, V)>
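An illustrative sketch of this trait's aggregate(fold, emit, hash) method from timely's aggregation module (the exact method signature is an assumption, not shown on this page), summing values per parity key:

use timely::dataflow::operators::{ToStream, Map, Inspect};
use timely::dataflow::operators::aggregation::Aggregate;

fn main() {
    timely::example(|scope| {
        (0..10)
            .to_stream(scope)
            // Key each record by parity, producing a `Stream<_, (i32, i32)>`.
            .map(|x| (x % 2, x))
            .aggregate(
                // Fold each value into the per-key accumulator.
                |_key, val, agg| { *agg += val; },
                // Emit one `(key, sum)` pair per key once the time completes.
                |key, agg: i32| (key, agg),
                // Route records by a hash of the key.
                |key| *key as u64,
            )
            .inspect(|x| println!("sum: {:?}", x));
    });
}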
impl<S: Scope, D: Data> Branch<S, D> for Stream<S, D>
fn branch(
    &self,
    condition: impl Fn(&S::Timestamp, &D) -> bool + 'static,
) -> (Stream<S, D>, Stream<S, D>)
Takes one input stream and splits it into two output streams.
For each record, the supplied closure is called with references to the
record's timestamp and data. If it returns true, the record is sent
to the second returned stream; otherwise it is sent to the first.
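An illustrative sketch, assuming the timely::example harness and the import paths shown below, that routes odd numbers to the second output:

use timely::dataflow::operators::{ToStream, Inspect};
use timely::dataflow::operators::branch::Branch;

fn main() {
    timely::example(|scope| {
        let (evens, odds) = (0..10)
            .to_stream(scope)
            // Records for which the closure returns `true` go to the
            // second returned stream; the rest go to the first.
            .branch(|_time, x| *x % 2 == 1);

        evens.inspect(|x| println!("even: {}", x));
        odds.inspect(|x| println!("odd: {}", x));
    });
}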
impl<G: Scope, D: Data> Delay<G, D> for Stream<G, D>
fn delay<L: FnMut(&D, &G::Timestamp) -> G::Timestamp + 'static>(
    &self,
    func: L,
) -> Self
Advances the timestamp of records using a supplied function.
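A sketch (harness and import paths assumed) that delays each record to a timestamp derived from its value; the supplied function is expected not to move a record's time backwards:

use timely::dataflow::operators::{ToStream, Delay, Inspect};

fn main() {
    timely::example(|scope| {
        (0..10u64)
            .to_stream(scope)
            // Move each record to timestamp `x / 3`; the returned time
            // must be at least the record's current time.
            .delay(|x, _time| *x / 3)
            .inspect(|x| println!("delayed: {}", x));
    });
}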
fn delay_total<L: FnMut(&D, &G::Timestamp) -> G::Timestamp + 'static>(
    &self,
    func: L,
) -> Self
where
    G::Timestamp: TotalOrder,
Advances the timestamp of records using a supplied function; unlike delay, this variant requires the scope's timestamp to be totally ordered.
impl<S: Scope, D: Data> Map<S, D> for Stream<S, D>
fn map_in_place<L: FnMut(&mut D) + 'static>(&self, logic: L) -> Stream<S, D>
Updates each element of the stream and yields the element, re-using memory where possible.
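For example, a sketch (harness and imports assumed) that doubles each element in place:

use timely::dataflow::operators::{ToStream, Map, Inspect};

fn main() {
    timely::example(|scope| {
        (0..10)
            .to_stream(scope)
            // Mutates each record in its existing allocation rather than
            // producing a new one.
            .map_in_place(|x| *x *= 2)
            .inspect(|x| println!("doubled: {}", x));
    });
}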