Struct mz_persist::indexed::background::Maintainer
pub struct Maintainer<B> {
blob: Arc<BlobCache<B>>,
metrics: Arc<Metrics>,
key_val_data_max_len: Option<usize>,
}
A runtime for background asynchronous maintenance of stored data.
Fields
blob: Arc<BlobCache<B>>
metrics: Arc<Metrics>
key_val_data_max_len: Option<usize>
Implementations
impl<B: Blob> Maintainer<B>
impl<B: Blob + Send + Sync + 'static> Maintainer<B>
pub async fn compact_trace(
    &self,
    req: CompactTraceReq
) -> Result<CompactTraceRes, Error>
Asynchronously runs the requested compaction on the tokio blocking work pool.
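The pattern here is to keep CPU-heavy compaction off the async executor by handing it to a blocking work pool. A minimal sketch of that offloading, using a plain std::thread and channel in place of the tokio blocking pool, with hypothetical CompactReq/CompactRes stand-ins for the real request and response types:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-ins for CompactTraceReq / CompactTraceRes.
struct CompactReq {
    parts: Vec<Vec<u64>>,
}
#[derive(Debug, PartialEq)]
struct CompactRes {
    merged: Vec<u64>,
}

// CPU-bound work: merge and sort all parts (stands in for real compaction).
fn compact_blocking(req: CompactReq) -> CompactRes {
    let mut merged: Vec<u64> = req.parts.into_iter().flatten().collect();
    merged.sort_unstable();
    CompactRes { merged }
}

// Offload the blocking work to a dedicated thread, analogous to
// tokio::task::spawn_blocking, and hand the result back over a channel.
fn compact_offloaded(req: CompactReq) -> CompactRes {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(compact_blocking(req));
    });
    rx.recv().expect("worker thread panicked")
}

fn main() {
    let req = CompactReq {
        parts: vec![vec![3, 1], vec![2]],
    };
    let res = compact_offloaded(req);
    assert_eq!(res.merged, vec![1, 2, 3]);
}
```

In the real method the caller awaits the result instead of blocking on a channel receive; the point is only that the merge itself never runs on the async executor's threads.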
fn compact_trace_blocking(
    handle: &Handle,
    blob: Arc<BlobCache<B>>,
    metrics: Arc<Metrics>,
    req: CompactTraceReq,
    key_val_data_max_len: Option<usize>
) -> Result<CompactTraceRes, Error>
Physically and logically compact two trace batches together.

This function performs trace compaction with bounded memory usage by:

1. Only ever keeping one BlobTraceBatchPart in memory from each of the two batches being compacted at a time.
2. Only keeping one ColumnarRecords worth of merged data in memory at a time.
3. Performing a roughly linear merge on the two trace batch parts: as we read data from each trace batch, we figure out the upper bound on data that can be compacted, consolidate and merge that, and then place it in a ColumnarRecords to await being written out as an output BlobTraceBatchPart.

Step (3) is best explained with an example. Suppose we are compacting two trace batches: A, which has two parts with keys [0, 10) and [10, 20), and B, which has three parts with keys [0, 15), [15, 30), and [30, 45). When we observe the first parts from A and B, which have keys [0, 10) and [0, 15) respectively, we cannot merge both parts together in full. We can only merge the subset of keys in [0, 10), as it's possible that subsequent parts in A have relevant keys in [10, 15).

TODO(rkhaitan): We don't do a true linear merge, as compaction requires us to both forward times and consolidate multiple updates at the forwarded times. I believe that doing so is possible in linear time, but doing so in a single pass was complicated enough that it is left for future work.
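The forward-and-consolidate step the TODO refers to can be sketched as follows. This is a simplified, sort-based version (not the single-pass linear merge left for future work), using the ((key, val), time, diff) update tuples from the signatures above; the as_of frontier parameter is an assumption for illustration:

```rust
type Update = ((Vec<u8>, Vec<u8>), u64, i64);

/// Forward every update's time to the compaction frontier `as_of`, then
/// consolidate: sum the diffs of records with equal ((key, val), time)
/// and drop any records whose diffs sum to zero.
fn merge_and_consolidate(a: Vec<Update>, b: Vec<Update>, as_of: u64) -> Vec<Update> {
    let mut all: Vec<Update> = a
        .into_iter()
        .chain(b)
        .map(|(kv, t, d)| (kv, t.max(as_of), d))
        .collect();
    // Sorting puts records with equal (key, val, time) adjacent to each other.
    all.sort();
    let mut out: Vec<Update> = Vec::new();
    for (kv, t, d) in all {
        if let Some(last) = out.last_mut() {
            if last.0 == kv && last.1 == t {
                // Same (key, val) at the same forwarded time: sum the diffs.
                last.2 += d;
                continue;
            }
        }
        out.push((kv, t, d));
    }
    // Records whose diffs cancelled out are logically absent after compaction.
    out.retain(|u| u.2 != 0);
    out
}

fn main() {
    // An insert at time 1 and a retraction at time 2 both forward to
    // time 2, where the +1 and -1 cancel, leaving nothing.
    let a = vec![((b"k".to_vec(), b"v".to_vec()), 1, 1)];
    let b = vec![((b"k".to_vec(), b"v".to_vec()), 2, -1)];
    assert!(merge_and_consolidate(a, b, 2).is_empty());
}
```

Because this sorts the combined input, it is O(n log n) rather than the linear merge the TODO describes, but the forward-then-consolidate semantics are the same.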
async fn load_trace_batch_part(
    blob: &BlobCache<B>,
    key: &str,
    updates: &mut Vec<((Vec<u8>, Vec<u8>), u64, i64)>
) -> Result<(), Error>
Read the data from the trace batch part at key into updates.
fn drain_leq_threshold(
    updates: Vec<((Vec<u8>, Vec<u8>), u64, i64)>,
    buffer: &mut Vec<((Vec<u8>, Vec<u8>), u64, i64)>,
    threshold: &(Vec<u8>, Vec<u8>)
) -> Vec<((Vec<u8>, Vec<u8>), u64, i64)>
Drain all records from updates with a (key, val) <= threshold into buffer.
TODO: this could be replaced with a drain_filter if that weren't experimental.
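A minimal sketch of this drain, under the assumption that updates is already sorted by (key, val), which holds when the records come from sorted trace batch parts. The function name mirrors the signature above, but this is an illustration, not the real implementation:

```rust
type Update = ((Vec<u8>, Vec<u8>), u64, i64);

/// Move every record with (key, val) <= threshold into `buffer` and
/// return the remainder. Assumes `updates` is sorted by (key, val).
fn drain_leq_threshold(
    mut updates: Vec<Update>,
    buffer: &mut Vec<Update>,
    threshold: &(Vec<u8>, Vec<u8>),
) -> Vec<Update> {
    // First index whose (key, val) is strictly greater than the threshold;
    // valid because the sorted input makes the predicate partitioned.
    let split = updates.partition_point(|(kv, _, _)| kv <= threshold);
    // Everything at or past `split` stays; everything before it drains.
    let rest = updates.split_off(split);
    buffer.extend(updates);
    rest
}

fn main() {
    let updates = vec![
        ((vec![1], vec![]), 0, 1),
        ((vec![2], vec![]), 0, 1),
        ((vec![3], vec![]), 0, 1),
    ];
    let mut buffer = Vec::new();
    let rest = drain_leq_threshold(updates, &mut buffer, &(vec![2], vec![]));
    assert_eq!(buffer.len(), 2); // keys [1] and [2] are <= the threshold
    assert_eq!(rest.len(), 1); // key [3] remains for a later output part
}
```

In the compaction loop above, the threshold is the upper bound on keys that can safely be merged so far, so the drained buffer is exactly the data ready to consolidate into the next output part.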
async fn write_trace_batch_part(
    blob: &BlobCache<B>,
    desc: Description<u64>,
    updates: ColumnarRecords,
    index: u64
) -> Result<(String, u64), Error>
Write a BlobTraceBatchPart containing updates into Blob. Returns the key and size in bytes for the trace batch part.
Trait Implementations
Auto Trait Implementations
impl<B> RefUnwindSafe for Maintainer<B> where
B: RefUnwindSafe,
impl<B> Send for Maintainer<B> where
B: Send + Sync,
impl<B> Sync for Maintainer<B> where
B: Send + Sync,
impl<B> Unpin for Maintainer<B>
impl<B> UnwindSafe for Maintainer<B> where
B: RefUnwindSafe,
Blanket Implementations
impl<T> BorrowMut<T> for T where
T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> FutureExt for T
fn with_context(self, otel_cx: Context) -> WithContext<Self>
fn with_current_context(self) -> WithContext<Self>
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wrap the input message T in a tonic::Request.
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.