Struct mz_persist_client::internal::trace::FlatTrace
pub struct FlatTrace<T> {
pub(crate) since: Antichain<T>,
pub(crate) legacy_batches: BTreeMap<Arc<HollowBatch<T>>, ()>,
pub(crate) hollow_batches: BTreeMap<SpineId, Arc<HollowBatch<T>>>,
pub(crate) spine_batches: BTreeMap<SpineId, ThinSpineBatch<T>>,
pub(crate) merges: BTreeMap<SpineId, ThinMerge<T>>,
}
This is a “flattened” representation of a Trace. Goals:
- small updates to the trace should result in small differences in the FlatTrace;
- two FlatTraces should be efficient to diff;
- converting to and from a Trace should be relatively straightforward.
These goals are all somewhat in tension, and the space of possible representations is pretty large. See individual fields for comments on some of the tradeoffs.
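To make the first two goals concrete, here is a minimal illustrative sketch of diffing two BTreeMap-keyed collections like the ones in the fields below. It is not the crate's actual state-diff machinery; it only shows why a map-shaped layout keeps a diff proportional to the size of the change.

use std::collections::BTreeMap;

// Illustrative only: compute the entries to upsert and the keys to delete in
// order to turn `before` into `after`. If only a few batches changed between
// the two versions, the returned diff is correspondingly small.
fn map_diff<K: Ord + Clone, V: Clone + PartialEq>(
    before: &BTreeMap<K, V>,
    after: &BTreeMap<K, V>,
) -> (Vec<(K, V)>, Vec<K>) {
    let mut upserts = Vec::new();
    let mut deletes = Vec::new();
    for (k, v) in after {
        // New key, or same key with a changed value: record an upsert.
        if before.get(k) != Some(v) {
            upserts.push((k.clone(), v.clone()));
        }
    }
    for k in before.keys() {
        // Key no longer present in the new version: record a delete.
        if !after.contains_key(k) {
            deletes.push(k.clone());
        }
    }
    (upserts, deletes)
}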
Fields
since: Antichain<T>
legacy_batches: BTreeMap<Arc<HollowBatch<T>>, ()>
Hollow batches without an associated ID. If this flattened trace contains spine batches, we can figure out which legacy batch belongs in which spine batch by comparing the descs (see the sketch after the field list).
Previously, we serialized a trace as just this list of batches. Keeping this data around helps ensure backwards compatibility. In the near future, we may still keep some batches here to help minimize the size of diffs – rewriting all the hollow batches in a shard can be prohibitively expensive. Eventually, we’d like to remove this in favour of the collection below.
hollow_batches: BTreeMap<SpineId, Arc<HollowBatch<T>>>
Hollow batches with an associated ID. Spine batches can reference these hollow batches by id directly.
spine_batches: BTreeMap<SpineId, ThinSpineBatch<T>>
Spine batches stored by ID. We reference hollow batches by ID, instead of inlining them, to make differential updates smaller when two batches merge together. We also store the level on the batch, instead of mapping from level to a list of batches… the level of a spine batch doesn’t change over time, but the list of batches at a particular level does.
merges: BTreeMap<SpineId, ThinMerge<T>>
In-progress merges. We store this by spine id instead of level to prepare for some possible generalizations to spine (merging N of M batches at a level). This is also a natural place to store incremental merge progress in the future.
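As referenced under legacy_batches above, here is a hedged sketch of the desc-matching idea: assign each ID-less legacy batch to the spine batch whose bounds cover it. The Desc and Id types are simplified hypothetical stand-ins; the real code compares the descs of HollowBatches against spine batches, so treat this as an illustration rather than the actual implementation.

use std::collections::BTreeMap;

// Hypothetical simplified stand-ins for this sketch only; the real types are
// HollowBatch (whose desc carries Antichain bounds) and ThinSpineBatch.
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Desc { lower: u64, upper: u64 }
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Id(usize, usize);

// Assign each ID-less ("legacy") batch to the spine batch whose desc covers
// its time bounds, mirroring the "compare the descs" idea described above.
fn assign_legacy_batches(
    legacy: &BTreeMap<Desc, ()>,
    spine: &BTreeMap<Id, Desc>,
) -> BTreeMap<Id, Vec<Desc>> {
    let mut out: BTreeMap<Id, Vec<Desc>> = BTreeMap::new();
    for (desc, _) in legacy {
        if let Some((id, _)) = spine
            .iter()
            .find(|(_, s)| s.lower <= desc.lower && desc.upper <= s.upper)
        {
            out.entry(*id).or_default().push(desc.clone());
        }
    }
    out
}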
Trait Implementations
impl<T: Timestamp + Codec64> RustType<ProtoTrace> for FlatTrace<T>
fn into_proto(&self) -> ProtoTrace
Convert a Self into a Proto value.
fn from_proto(proto: ProtoTrace) -> Result<Self, TryFromProtoError>
fn into_proto_owned(self) -> Proto
A zero-clone version of Self::into_proto that types can optionally implement; otherwise, the default implementation delegates to Self::into_proto.
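A hedged usage sketch of this impl, assuming crate-internal scope (FlatTrace and its fields are pub(crate)), that ProtoTrace is the generated protobuf message named in the impl, and that RustType and TryFromProtoError come from the mz_proto crate. The RustType contract is that from_proto inverts into_proto.

use mz_proto::{RustType, TryFromProtoError};

// Roundtrip a flattened trace through its protobuf representation. The
// `flat_trace` argument is assumed to come from flattening an existing Trace.
fn roundtrip(flat_trace: &FlatTrace<u64>) -> Result<FlatTrace<u64>, TryFromProtoError> {
    // Encode into the protobuf message...
    let proto: ProtoTrace = flat_trace.into_proto();
    // ...and decode it back; a lossless roundtrip reproduces the input.
    FlatTrace::from_proto(proto)
}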
Auto Trait Implementations
impl<T> Freeze for FlatTrace<T> where T: Freeze
impl<T> RefUnwindSafe for FlatTrace<T> where T: RefUnwindSafe
impl<T> Send for FlatTrace<T>
impl<T> Sync for FlatTrace<T>
impl<T> Unpin for FlatTrace<T> where T: Unpin
impl<T> UnwindSafe for FlatTrace<T> where T: RefUnwindSafe + UnwindSafe
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T where T: Clone
default unsafe fn clone_to_uninit(&self, dst: *mut T)
impl<T> FutureExt for T
fn with_context(self, otel_cx: Context) -> WithContext<Self>
fn with_current_context(self) -> WithContext<Self>
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request.
impl<T> Pointable for T
impl<T> ProgressEventTimestamp for T
impl<P, R> ProtoType<R> for P where R: RustType<P>
fn into_rust(self) -> Result<R, TryFromProtoError>
See RustType::from_proto.
fn from_rust(rust: &R) -> P
See RustType::into_proto.
impl<'a, S, T> Semigroup<&'a S> for T where T: Semigroup<S>
fn plus_equals(&mut self, rhs: &&'a S)
The method of std::ops::AddAssign, for types that do not implement AddAssign.