Struct persist::indexed::cache::BlobCache
pub struct BlobCache<B> {
build_version: Version,
metrics: Arc<Metrics>,
blob: Arc<Mutex<B>>,
async_runtime: Arc<AsyncRuntime>,
cache: BlobCacheInner,
prev_meta_len: u64,
}
A disk-backed cache for objects in Blob storage.
The data for the objects in the cache is stored on disk, mmap’d, and a validated handle is stored in-memory to avoid repeatedly decoding it.
TODO: Add a limit to bound how much disk this cache can use. The Arc return type for get_batch seems correct, but means that a bad user could starve the cache by indefinitely holding handles. The Arcs could be made into weak references so the cache could forcefully reclaim the backing data, but this is going to make performance of using the cached batches unpredictable. I think we probably want a soft limit and a hard limit, where the soft limit does some alerting and the hard limit starts blocking (or erroring) until disk space frees up.
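The soft/hard-limit policy described in the TODO could be sketched as follows. All names and thresholds here are hypothetical, stdlib-only illustrations, not the crate's actual API:

```rust
// Hypothetical sketch of the soft/hard disk-limit policy from the TODO
// above: under the soft limit admit silently, between the limits admit
// but alert, over the hard limit block or error until space frees up.

#[derive(Debug, PartialEq)]
enum Admission {
    Ok,            // under the soft limit
    AlertAndAdmit, // over the soft limit: alert, but still admit
    Reject,        // over the hard limit: block (or error)
}

struct DiskLimits {
    soft_bytes: u64,
    hard_bytes: u64,
}

impl DiskLimits {
    fn admit(&self, used_bytes: u64, new_bytes: u64) -> Admission {
        let total = used_bytes + new_bytes;
        if total > self.hard_bytes {
            Admission::Reject
        } else if total > self.soft_bytes {
            Admission::AlertAndAdmit
        } else {
            Admission::Ok
        }
    }
}

fn main() {
    let limits = DiskLimits { soft_bytes: 80, hard_bytes: 100 };
    assert_eq!(limits.admit(50, 10), Admission::Ok);
    assert_eq!(limits.admit(50, 40), Admission::AlertAndAdmit);
    assert_eq!(limits.admit(50, 60), Admission::Reject);
    println!("ok");
}
```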
Fields
build_version: Version
metrics: Arc<Metrics>
blob: Arc<Mutex<B>>
async_runtime: Arc<AsyncRuntime>
cache: BlobCacheInner
prev_meta_len: u64
Implementations
Returns a new, empty cache for the given Blob storage.
Synchronously closes the cache, releasing exclusive-writer locks and causing all future commands to error.
This method is idempotent. Returns true if the blob had not previously been closed.
fn fetch_unsealed_batch_sync(
&self,
key: &str,
hint: CacheHint
) -> Result<Arc<BlobUnsealedBatch>, Error>
Synchronously fetches the batch for the given key.
Asynchronously returns the batch for the given key, fetching in another thread if it’s not already in the cache.
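The get-or-fetch behavior can be illustrated with a stdlib-only sketch. The types here are stand-ins (the real cache stores decoded Arc<BlobUnsealedBatch> handles and runs fetches on its AsyncRuntime rather than a raw thread):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Illustrative stand-in for a decoded batch; the real cache hands out
// Arc handles so every caller shares one decoded copy.
type Batch = Vec<u8>;

#[derive(Clone, Default)]
struct Cache {
    entries: Arc<Mutex<HashMap<String, Arc<Batch>>>>,
}

impl Cache {
    // Return the cached handle if present; otherwise fetch on another
    // thread (standing in for the async runtime), insert, and return.
    fn get_or_fetch<F>(&self, key: &str, fetch: F) -> Arc<Batch>
    where
        F: FnOnce() -> Batch + Send + 'static,
    {
        if let Some(batch) = self.entries.lock().unwrap().get(key) {
            return Arc::clone(batch);
        }
        let entries = Arc::clone(&self.entries);
        let key = key.to_string();
        thread::spawn(move || {
            let batch = Arc::new(fetch());
            entries.lock().unwrap().insert(key, Arc::clone(&batch));
            batch
        })
        .join()
        .unwrap()
    }
}

fn main() {
    let cache = Cache::default();
    let first = cache.get_or_fetch("k", || vec![1, 2, 3]);
    // The second call hits the cache: both handles point at the
    // same allocation, so no second fetch or decode happens.
    let second = cache.get_or_fetch("k", || vec![9]);
    assert!(Arc::ptr_eq(&first, &second));
    println!("ok");
}
```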
fn fetch_trace_batch_sync(
&self,
key: &str,
hint: CacheHint
) -> Result<Arc<BlobTraceBatch>, Error>
Synchronously fetches the batch for the given key.
Asynchronously returns the batch for the given key, fetching in another thread if it’s not already in the cache.
Fetches metadata about what batches are in Blob storage.
pub fn set_unsealed_batch(
&mut self,
key: String,
batch: BlobUnsealedBatch
) -> Result<(ProtoBatchFormat, u64), Error>
Writes a batch to backing Blob storage.
Returns the size of the encoded blob value in bytes.
Removes a batch from both Blob storage and the local cache.
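Returning the encoded size follows from encoding the batch into a buffer before handing it to Blob storage. A hedged sketch, with the encoding and the Blob trait as stand-ins (the real code uses a protobuf-based format, per the ProtoBatchFormat in the return type):

```rust
use std::collections::HashMap;

// Stand-in for the Blob storage trait; illustrative only.
trait Blob {
    fn set(&mut self, key: &str, value: Vec<u8>) -> Result<(), String>;
}

struct MemBlob(HashMap<String, Vec<u8>>);

impl Blob for MemBlob {
    fn set(&mut self, key: &str, value: Vec<u8>) -> Result<(), String> {
        self.0.insert(key.to_string(), value);
        Ok(())
    }
}

// Hypothetical encode step; the real one serializes the batch.
fn encode(batch: &[u8]) -> Vec<u8> {
    batch.to_vec()
}

// Encode, record the encoded length, write, then return the length.
fn set_batch<B: Blob>(blob: &mut B, key: String, batch: &[u8]) -> Result<u64, String> {
    let val = encode(batch);
    let val_len = val.len() as u64; // size of the encoded blob value in bytes
    blob.set(&key, val)?;
    Ok(val_len)
}

fn main() {
    let mut blob = MemBlob(HashMap::new());
    let len = set_batch(&mut blob, "key".into(), &[1, 2, 3, 4]).unwrap();
    assert_eq!(len, 4);
    println!("ok");
}
```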
pub fn set_trace_batch(
&self,
key: String,
batch: BlobTraceBatch
) -> Result<(ProtoBatchFormat, u64), Error>
Writes a batch to backing Blob storage.
Returns the size of the encoded blob value in bytes.
Removes a batch from both Blob storage and the local cache.
Overwrites metadata about what batches are in Blob storage.
Trait Implementations
Auto Trait Implementations
impl<B> !RefUnwindSafe for BlobCache<B>
impl<B> !UnwindSafe for BlobCache<B>
Blanket Implementations
Mutably borrows from an owned value.
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.