pub struct Maintainer<B> {
    blob: Arc<BlobCache<B>>,
    metrics: Arc<Metrics>,
    key_val_data_max_len: Option<usize>,
}

A runtime for background asynchronous maintenance of stored data.

Fields

blob: Arc<BlobCache<B>>
metrics: Arc<Metrics>
key_val_data_max_len: Option<usize>

Implementations

Returns a new Maintainer.

Asynchronously runs the requested compaction on the tokio blocking work pool.
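The shape of this pattern can be sketched without persist's actual types. The real implementation runs on tokio's blocking work pool (presumably via `tokio::task::spawn_blocking`); the stand-in below uses only the standard library, with a dedicated thread plus a channel playing the role of the blocking pool, and the `CompactionReq`/`CompactionRes` types are hypothetical placeholders.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-ins for a compaction request and its result.
struct CompactionReq {
    part_keys: Vec<u64>,
}
struct CompactionRes {
    merged_len: usize,
}

// CPU-heavy work that must not run on an async executor thread.
fn compact_blocking(req: CompactionReq) -> CompactionRes {
    CompactionRes { merged_len: req.part_keys.len() }
}

// std-only analogue of offloading to a blocking pool: run the work on a
// separate thread and hand back a receiver that acts like a future.
fn spawn_compaction(req: CompactionReq) -> mpsc::Receiver<CompactionRes> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(compact_blocking(req));
    });
    rx
}

fn main() {
    let rx = spawn_compaction(CompactionReq { part_keys: vec![1, 2, 3] });
    let res = rx.recv().expect("worker thread panicked");
    assert_eq!(res.merged_len, 3);
}
```

The point of the indirection is that the caller's (async) thread is never blocked on the merge itself, only on awaiting the result.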

Physically and logically compacts two trace batches together.

This function performs trace compaction with bounded memory usage by:

  1. Keeping only one BlobTraceBatchPart in memory at a time from each of the two batches being compacted.
  2. Only keeping one ColumnarRecords worth of merged data in memory at a time.
  3. Performing a roughly linear merge over the two batches' parts: as data is read from each batch, we determine the upper bound on keys that can safely be compacted, consolidate and merge that prefix, and stage the result in a ColumnarRecords to await being written out as an output BlobTraceBatchPart.

Step (3) is best explained with an example. Suppose we are compacting two trace batches: A, which has two parts with keys [0, 10) and [10, 20), and B, which has three parts with keys [0, 15), [15, 30), and [30, 45). When we observe the first parts from A and B, with keys [0, 10) and [0, 15) respectively, we cannot merge the entirety of both parts together.

We can only merge the subset of keys in [0, 10), as it is possible that subsequent parts in A have relevant keys in [10, 15).
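The bound computation above can be sketched directly in a simplified model (illustrative types, not persist's actual ones): with the current part of A covering keys [0, 10) and the current part of B covering [0, 15), only keys below min(10, 15) = 10 can be merged and consolidated now; the rest of B's part must wait for A's next part.

```rust
// The key upper bound up to which both batches' current parts can be merged.
fn mergeable_upper(upper_a: u64, upper_b: u64) -> u64 {
    upper_a.min(upper_b)
}

// Consolidate the prefix of both parts below `bound`: collect matching
// (key, diff) updates, sort by key, sum diffs for equal keys, and drop
// entries whose diffs cancel to zero.
fn consolidate_below(a: &[(u64, i64)], b: &[(u64, i64)], bound: u64) -> Vec<(u64, i64)> {
    let mut merged: Vec<(u64, i64)> = a
        .iter()
        .chain(b.iter())
        .copied()
        .filter(|&(k, _)| k < bound)
        .collect();
    merged.sort_by_key(|&(k, _)| k);
    let mut out: Vec<(u64, i64)> = Vec::new();
    for (k, d) in merged {
        match out.last_mut() {
            Some((lk, ld)) if *lk == k => *ld += d,
            _ => out.push((k, d)),
        }
    }
    out.retain(|&(_, d)| d != 0);
    out
}

fn main() {
    // A's first part has keys in [0, 10); B's first part in [0, 15).
    let a = vec![(3, 1), (7, 1)];
    let b = vec![(3, -1), (12, 1)];
    let bound = mergeable_upper(10, 15);
    assert_eq!(bound, 10);
    // Key 3 cancels out, key 7 survives; key 12 waits for A's next part.
    assert_eq!(consolidate_below(&a, &b, bound), vec![(7, 1)]);
}
```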

TODO(rkhaitan): We don’t do the real linear merge as compaction requires us to both forward times and consolidate multiple updates at the forwarded times. I believe that doing so is possible in linear time, but doing so in a single pass was complicated enough that it is left for future work.
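To illustrate what "forward times and consolidate" means, here is a sketch using hypothetical simplified types (a plain `(key, time, diff)` tuple rather than persist's actual update representation): compaction advances each update's timestamp to the compaction frontier, which can make previously distinct (key, time) pairs coincide, and their diffs must then be summed.

```rust
// Advance every update's time to at least `since`, then sum diffs for
// updates that now share the same (key, time), dropping zeroed entries.
fn forward_and_consolidate(
    updates: &mut Vec<(u64, u64, i64)>, // (key, time, diff)
    since: u64,
) {
    for (_, t, _) in updates.iter_mut() {
        if *t < since {
            *t = since;
        }
    }
    updates.sort_by_key(|&(k, t, _)| (k, t));
    let mut out: Vec<(u64, u64, i64)> = Vec::new();
    for &(k, t, d) in updates.iter() {
        match out.last_mut() {
            Some((lk, lt, ld)) if *lk == k && *lt == t => *ld += d,
            _ => out.push((k, t, d)),
        }
    }
    out.retain(|&(_, _, d)| d != 0);
    *updates = out;
}

fn main() {
    // Key 1 was inserted at time 2 and deleted at time 4; forwarding both
    // updates to time 5 makes them collapse and cancel entirely.
    let mut updates = vec![(1, 2, 1), (1, 4, -1), (2, 3, 1)];
    forward_and_consolidate(&mut updates, 5);
    assert_eq!(updates, vec![(2, 5, 1)]);
}
```

Note the interaction the TODO describes: forwarding can reorder and collapse updates that a plain key-ordered linear merge would treat as distinct, which is what complicates doing everything in a single pass.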

Read the data from the trace batch part at key into updates.

Drain all records from updates with (key, val) <= threshold into buffer.

TODO: this could be replaced with a drain_filter if that wasn’t experimental.
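A sketch of the drain described above, assuming updates is a Vec of ((key, val), diff) tuples sorted by (key, val) (hypothetical simplified types): since drain_filter is unstable, the same effect is available on stable Rust by finding the split point with Vec::partition_point and then using Vec::drain.

```rust
// Move every record with (key, val) <= threshold from `updates` into
// `buffer`, preserving order. Assumes `updates` is sorted by (key, val).
fn drain_below_threshold(
    updates: &mut Vec<((u64, u64), i64)>,
    buffer: &mut Vec<((u64, u64), i64)>,
    threshold: (u64, u64),
) {
    // partition_point returns the first index whose record exceeds the
    // threshold, so everything before it is drained.
    let split = updates.partition_point(|&(kv, _)| kv <= threshold);
    buffer.extend(updates.drain(..split));
}

fn main() {
    let mut updates = vec![((1, 0), 1), ((2, 5), 1), ((3, 0), 1)];
    let mut buffer = Vec::new();
    drain_below_threshold(&mut updates, &mut buffer, (2, 5));
    assert_eq!(buffer, vec![((1, 0), 1), ((2, 5), 1)]);
    assert_eq!(updates, vec![((3, 0), 1)]);
}
```

Because the input is sorted, this is O(log n) to find the split plus the cost of moving the drained prefix, which matches what an order-aware drain_filter would do here.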

Write a BlobTraceBatchPart containing updates into Blob.

Returns the key and size in bytes for the trace batch part.

Trait Implementations

Formats the value using the given formatter.
