Timely dataflow is a framework for managing and executing data-parallel dataflow computations.
The code is organized in crates and modules that are meant to depend as little as possible on each other.
Serialization: Timely uses the bincode crate for serialization. Performance could be improved.
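For instance, here is a minimal sketch of a record type that can be serialized this way; the Reading type and its fields are hypothetical, and the serde derive macros are assumed to be available:

use serde::{Deserialize, Serialize};

// Hypothetical record type: deriving serde's traits (plus Clone) is
// typically what a type needs so that timely can serialize it with
// bincode and move it between workers.
#[derive(Clone, Debug, Serialize, Deserialize)]
struct Reading {
    sensor: u32,
    value: f64,
}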
Communication: The timely_communication crate defines several primitives for communicating between dataflow workers, and across machine boundaries.
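For example, here is a minimal sketch (the worker count is arbitrary) of asking timely to run several workers in one process; timely_communication supplies the channels those workers use to exchange data and progress updates:

// Run four worker threads in this process; Config::process selects the
// intra-process communication infrastructure.
timely::execute(timely::Config::process(4), |worker| {
    println!("worker {} of {} started", worker.index(), worker.peers());
}).unwrap();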
Progress tracking: The timely::progress module defines core dataflow structures for tracking and reporting progress in a timely dataflow system, namely the number of outstanding dataflow messages and un-exercised message capabilities throughout the timely dataflow graph. It depends on timely_communication to exchange progress messages.
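As a sketch of how this surfaces in user code (using the standard Input, Inspect, and Probe operators; the specific stream and rounds are illustrative), a probe attached to a stream reports which timestamps may still receive data, so a worker can step until a round is complete:

use timely::dataflow::operators::{Input, Inspect, Probe};

timely::execute_from_args(std::env::args(), |worker| {
    // build a dataflow with an input, an inspect, and a probe on the output
    let (mut input, probe) = worker.dataflow(|scope| {
        let (input, stream) = scope.new_input::<u64>();
        let probe = stream
            .inspect(|x| println!("saw: {:?}", x))
            .probe();
        (input, probe)
    });

    for round in 0..10 {
        input.send(round);
        input.advance_to(round + 1);
        // step the worker until progress tracking reports that no work
        // at time `round` remains outstanding
        while probe.less_than(input.time()) {
            worker.step();
        }
    }
}).unwrap();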
Dataflow construction: The timely::dataflow module defines an example dataflow system using communication and progress to both exchange data and progress information, in support of an actual data-parallel timely dataflow computation. It depends on timely_communication to move data, and timely::progress to provide correct operator notifications.
§Examples
The following is a hello-world dataflow program.
use timely::*;
use timely::dataflow::operators::{Input, Inspect};

// construct and execute a timely dataflow
timely::execute_from_args(std::env::args(), |worker| {

    // add an input and base computation off of it
    let mut input = worker.dataflow(|scope| {
        let (input, stream) = scope.new_input();
        stream.inspect(|x| println!("hello {:?}", x));
        input
    });

    // introduce input, advance computation
    for round in 0..10 {
        input.send(round);
        input.advance_to(round + 1);
        worker.step();
    }
}).unwrap();
The program uses timely::execute_from_args to spin up a computation based on command line arguments and a closure specifying what each worker should do, given a handle to its timely dataflow worker (in this case, worker). Calling dataflow on the worker creates a Scope, which allows you to define inputs, feedback cycles, and dataflow subgraphs, as part of building the dataflow graph of your dreams.

In this example, we build a new dataflow scope using worker.dataflow, add an exogenous input using new_input, and add a dataflow inspect operator to print each observed record. We then introduce input at increasing rounds, advance the input (promising that we will introduce no more input at prior rounds), and step the computation.
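In this single-stream example each record stays on the worker that introduced it. As a further sketch (the routing function here is arbitrary), the exchange operator routes records between workers by a key, exercising the data channels provided by timely_communication when more than one worker is running:

use timely::dataflow::operators::{ToStream, Exchange, Inspect};

timely::execute_from_args(std::env::args(), |worker| {
    let index = worker.index();
    worker.dataflow::<u64, _, _>(|scope| {
        (0..10u64)
            .to_stream(scope)
            // route each record to the worker determined by its value
            .exchange(|x| *x)
            .inspect(move |x| println!("worker {} saw: {:?}", index, x));
    });
}).unwrap();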
Re-exports§
pub use execute::execute;
pub use execute::execute_directly;
pub use execute::example;
pub use execute::execute_from_args;
pub use order::PartialOrder;
pub use worker::Config as WorkerConfig;
pub use execute::Config;
Modules§
- Re-export of the timely_bytes crate.
- Re-export of the timely_communication crate.
- Re-export of the timely_container crate.
- Abstractions for timely dataflow programming.
- Starts a timely dataflow execution from configuration information and per-worker logic.
- Traits, implementations, and macros related to logging timely events.
- Re-export of the timely_logging crate.
- Traits and types for partially ordered sets.
- Progress tracking mechanisms to support notification in timely dataflow
- Types and traits to activate and schedule fibers.
- Synchronization primitives implemented in timely dataflow.
- The root of each single-threaded worker.
Structs§
- A wrapper that indicates a serialization/deserialization strategy.
- A wrapper that indicates bincode as the serialization/deserialization strategy.
Enums§
- Possible configurations for the communication infrastructure.
Traits§
- A container transferring data through dataflow edges
- A composite trait for types usable as data in timely dataflow.
- A composite trait for types usable on exchange channels in timely dataflow.