Struct mz_persist_client::write::WriteHandle
pub struct WriteHandle<K: Codec, V: Codec, T, D> {
pub(crate) cfg: PersistConfig,
pub(crate) metrics: Arc<Metrics>,
pub(crate) machine: Machine<K, V, T, D>,
pub(crate) gc: GarbageCollector<K, V, T, D>,
pub(crate) compact: Option<Compactor<K, V, T, D>>,
pub(crate) blob: Arc<dyn Blob>,
pub(crate) isolated_runtime: Arc<IsolatedRuntime>,
pub(crate) writer_id: WriterId,
pub(crate) debug_state: HandleDebugState,
pub(crate) write_schemas: Schemas<K, V>,
pub(crate) upper: Antichain<T>,
expire_fn: Option<ExpireFn>,
}
A “capability” granting the ability to apply updates to some shard at times greater than or equal to self.upper().

All async methods on WriteHandle retry for as long as they are able, but the returned std::future::Futures implement “cancel on drop” semantics. This means that callers can add a timeout using tokio::time::timeout or tokio::time::timeout_at. For example:

tokio::time::timeout(timeout, write.fetch_recent_upper()).await
Fields
cfg: PersistConfig
metrics: Arc<Metrics>
machine: Machine<K, V, T, D>
gc: GarbageCollector<K, V, T, D>
compact: Option<Compactor<K, V, T, D>>
blob: Arc<dyn Blob>
isolated_runtime: Arc<IsolatedRuntime>
writer_id: WriterId
debug_state: HandleDebugState
write_schemas: Schemas<K, V>
upper: Antichain<T>
expire_fn: Option<ExpireFn>
Implementations
impl<K, V, T, D> WriteHandle<K, V, T, D>
pub(crate) fn new( cfg: PersistConfig, metrics: Arc<Metrics>, machine: Machine<K, V, T, D>, gc: GarbageCollector<K, V, T, D>, blob: Arc<dyn Blob>, writer_id: WriterId, purpose: &str, write_schemas: Schemas<K, V>, ) -> Self
pub fn from_read(read: &ReadHandle<K, V, T, D>, purpose: &str) -> Self
Creates a WriteHandle for the same shard from an existing ReadHandle.
pub fn upper(&self) -> &Antichain<T>
A cached version of the shard-global upper frontier.

This is the most recent upper discovered by this handle. It is potentially more stale than Self::shared_upper but is lock-free and allocation-free. This will always be less than or equal to the shard-global upper.
pub fn shared_upper(&self) -> Antichain<T>
A less-stale cached version of the shard-global upper frontier.

This is the most recently known upper for this shard process-wide, but unlike Self::upper it requires a mutex and a clone. This will always be less than or equal to the shard-global upper.
pub async fn fetch_recent_upper(&mut self) -> &Antichain<T>
Fetches and returns a recent shard-global upper. Importantly, this operation is linearized with write operations.

This requires fetching the latest state from consensus and is therefore a potentially expensive operation.
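A minimal sketch of the trade-off between these accessors, assuming a handle of type WriteHandle<String, String, u64, i64> named write (the type parameters and variable name are illustrative, not from this page):

// A cheap, possibly stale view: just reads the upper cached on this handle.
let cached = write.upper().clone();
// A linearized view: fetches the latest state from consensus, so it can be slow.
let recent = write.fetch_recent_upper().await.clone();
// The cached upper never runs ahead of the recent one.
assert!(timely::PartialOrder::less_equal(&cached, &recent));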
pub async fn append<SB, KB, VB, TB, DB, I>(
    &mut self,
    updates: I,
    lower: Antichain<T>,
    upper: Antichain<T>,
) -> Result<Result<(), UpperMismatch<T>>, InvalidUsage<T>>
Applies updates to this shard and downgrades this handle’s upper to upper.

The innermost Result is Ok if the updates were successfully written. If not, an Upper err containing the current writer upper is returned. If that happens, we also update our local upper to match the current upper. This is useful in cases where a timeout happens in between a successful write and returning that to the client.

In contrast to Self::compare_and_append, multiple WriteHandles may be used concurrently to write to the same shard, but in this case, the data being written must be identical (in the sense of “definite”-ness). It’s intended for replicated use by source ingestion, sinks, etc.

All times in updates must be greater than or equal to lower and not greater than or equal to upper. An upper of the empty antichain “finishes” this shard, promising that no more data is ever incoming.

updates may be empty, which allows for downgrading upper to communicate progress. It is possible to call this with upper equal to self.upper() and an empty updates (making the call a no-op).

This uses a bounded amount of memory, even when updates is very large. Individual records, however, should be small enough that we can reasonably chunk them up: O(KB) is definitely fine; for O(MB), come talk to us.

The clunky multi-level Result is to enable more obvious error handling in the caller. See http://sled.rs/errors.html for details.
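For orientation, a hedged sketch of a single append call, again assuming an illustrative WriteHandle<String, String, u64, i64> named write whose upper is currently [3]:

use timely::progress::Antichain;

let updates = vec![(("k1".to_owned(), "v1".to_owned()), 3u64, 1i64)];
let lower = Antichain::from_elem(3u64);
let upper = Antichain::from_elem(4u64);
match write.append(&updates, lower, upper).await {
    Ok(Ok(())) => { /* written; this handle's upper is now [4] */ }
    Ok(Err(mismatch)) => { /* not written: `mismatch` carries the current writer upper */ }
    Err(invalid_usage) => { /* e.g. an update time outside the [lower, upper) range */ }
}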
pub async fn compare_and_append<SB, KB, VB, TB, DB, I>(
    &mut self,
    updates: I,
    expected_upper: Antichain<T>,
    new_upper: Antichain<T>,
) -> Result<Result<(), UpperMismatch<T>>, InvalidUsage<T>>
Applies updates to this shard and downgrades this handle’s upper to new_upper iff the current global upper of this shard is expected_upper.

The innermost Result is Ok if the updates were successfully written. If not, an Upper err containing the current global upper is returned.

In contrast to Self::append, this linearizes mutations from all writers. It’s intended for use as an atomic primitive for timestamp bindings, SQL tables, etc.

All times in updates must be greater than or equal to expected_upper and not greater than or equal to new_upper. A new_upper of the empty antichain “finishes” this shard, promising that no more data is ever incoming.

updates may be empty, which allows for downgrading upper to communicate progress. It is possible to heartbeat a writer lease by calling this with new_upper equal to self.upper() and an empty updates (making the call a no-op).

This uses a bounded amount of memory, even when updates is very large. Individual records, however, should be small enough that we can reasonably chunk them up: O(KB) is definitely fine; for O(MB), come talk to us.

The clunky multi-level Result is to enable more obvious error handling in the caller. See http://sled.rs/errors.html for details.
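A sketch of the retry loop this primitive supports, under the same illustrative WriteHandle<String, String, u64, i64> assumptions (the UpperMismatch::current field name is assumed here):

use timely::progress::Antichain;

let mut expected_upper = write.upper().clone();
loop {
    let ts = *expected_upper.as_option().expect("shard is not finished");
    let updates = vec![(("k1".to_owned(), "v1".to_owned()), ts, 1i64)];
    let new_upper = Antichain::from_elem(ts + 1);
    match write
        .compare_and_append(&updates, expected_upper.clone(), new_upper)
        .await
        .expect("invalid usage")
    {
        Ok(()) => break, // written atomically at time `ts`
        Err(mismatch) => {
            // Another writer advanced the shard; retry from its current upper.
            expected_upper = mismatch.current;
        }
    }
}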
pub async fn append_batch(
    &mut self,
    batch: Batch<K, V, T, D>,
    lower: Antichain<T>,
    upper: Antichain<T>,
) -> Result<Result<(), UpperMismatch<T>>, InvalidUsage<T>>
Appends the batch of updates to the shard and downgrades this handle’s upper to upper.

The innermost Result is Ok if the updates were successfully written. If not, an Upper err containing the current writer upper is returned. If that happens, we also update our local upper to match the current upper. This is useful in cases where a timeout happens in between a successful write and returning that to the client.

In contrast to Self::compare_and_append_batch, multiple WriteHandles may be used concurrently to write to the same shard, but in this case, the data being written must be identical (in the sense of “definite”-ness). It’s intended for replicated use by source ingestion, sinks, etc.

An upper of the empty antichain “finishes” this shard, promising that no more data is ever incoming.

The batch may be empty, which allows for downgrading upper to communicate progress. It is possible to heartbeat a writer lease by calling this with upper equal to self.upper() and an empty batch (making the call a no-op).

The clunky multi-level Result is to enable more obvious error handling in the caller. See http://sled.rs/errors.html for details.
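A sketch of producing a batch with Self::batch and then appending it, with the same illustrative types as above:

use timely::progress::Antichain;

let updates = vec![(("k1".to_owned(), "v1".to_owned()), 3u64, 1i64)];
let lower = Antichain::from_elem(3u64);
let upper = Antichain::from_elem(4u64);
// Upload the updates to blob storage as a Batch, then link that batch into the shard.
let batch = write
    .batch(&updates, lower.clone(), upper.clone())
    .await
    .expect("invalid usage");
write
    .append_batch(batch, lower, upper)
    .await
    .expect("invalid usage")
    .expect("unexpected upper");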
pub async fn compare_and_append_batch(
    &mut self,
    batches: &mut [&mut Batch<K, V, T, D>],
    expected_upper: Antichain<T>,
    new_upper: Antichain<T>,
) -> Result<Result<(), UpperMismatch<T>>, InvalidUsage<T>>
Appends the batch of updates to the shard and downgrades this handle’s upper to new_upper iff the current global upper of this shard is expected_upper.

The innermost Result is Ok if the batch was successfully written. If not, an Upper err containing the current global upper is returned.

In contrast to Self::append_batch, this linearizes mutations from all writers. It’s intended for use as an atomic primitive for timestamp bindings, SQL tables, etc.

A new_upper of the empty antichain “finishes” this shard, promising that no more data is ever incoming.

The batch may be empty, which allows for downgrading upper to communicate progress. It is possible to heartbeat a writer lease by calling this with new_upper equal to self.upper() and an empty batch (making the call a no-op).

IMPORTANT: In case of an erroneous result the caller is responsible for the lifecycle of the batch. It can be deleted or it can be used to retry with adjusted frontiers.

The clunky multi-level Result is to enable more obvious error handling in the caller. See http://sled.rs/errors.html for details.
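A sketch of the batch-lifecycle obligation described above, with the same illustrative types; Batch::delete is assumed to be the cleanup path:

use timely::progress::Antichain;

let expected_upper = Antichain::from_elem(3u64);
let new_upper = Antichain::from_elem(4u64);
let mut batch = write
    .batch(
        &[(("k1".to_owned(), "v1".to_owned()), 3u64, 1i64)],
        expected_upper.clone(),
        new_upper.clone(),
    )
    .await
    .expect("invalid usage");
let result = write
    .compare_and_append_batch(&mut [&mut batch], expected_upper, new_upper)
    .await
    .expect("invalid usage");
if result.is_err() {
    // On a mismatch the batch is still the caller's responsibility:
    // delete it, or adjust its frontiers and retry.
    batch.delete().await;
}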
pub fn batch_from_transmittable_batch(
    &self,
    batch: ProtoBatch,
) -> Batch<K, V, T, D>
Turns the given ProtoBatch back into a Batch which can be used to append it to this shard.
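This supports workflows where one process uploads a batch and a different process appends it. A hedged sketch, in which the helper name, the import paths, and Batch::into_transmittable_batch as the serializing counterpart are all assumptions:

use mz_persist_client::batch::ProtoBatch;
use mz_persist_client::write::WriteHandle;
use timely::progress::Antichain;

async fn append_transmitted(
    write: &mut WriteHandle<String, String, u64, i64>,
    proto: ProtoBatch,
    expected_upper: Antichain<u64>,
    new_upper: Antichain<u64>,
) {
    // Rehydrate the serialized batch against this shard, then append it atomically.
    let mut batch = write.batch_from_transmittable_batch(proto);
    write
        .compare_and_append_batch(&mut [&mut batch], expected_upper, new_upper)
        .await
        .expect("invalid usage")
        .expect("unexpected upper");
}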
pub fn builder(&mut self, lower: Antichain<T>) -> BatchBuilder<K, V, T, D>
Returns a BatchBuilder that can be used to write a batch of updates to blob storage which can then be appended to this shard using Self::compare_and_append_batch or Self::append_batch.
It is correct to create an empty batch, which allows for downgrading upper to communicate progress (see Self::compare_and_append_batch or Self::append_batch).
The builder uses a bounded amount of memory, even when the number of updates is very large. Individual records, however, should be small enough that we can reasonably chunk them up: O(KB) is definitely fine, O(MB) come talk to us.
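A sketch of the builder flow, again assuming an illustrative WriteHandle<String, String, u64, i64> named write; BatchBuilder::add and BatchBuilder::finish are assumed to be the staging and sealing methods:

use timely::progress::Antichain;

let lower = Antichain::from_elem(3u64);
let upper = Antichain::from_elem(4u64);
let mut builder = write.builder(lower);
// Stage updates one at a time; large inputs are flushed to blob storage incrementally.
builder
    .add(&"k1".to_owned(), &"v1".to_owned(), &3u64, &1i64)
    .await
    .expect("invalid usage");
// Sealing the builder yields a Batch that can be passed to append_batch or
// compare_and_append_batch.
let batch = builder.finish(upper).await.expect("invalid usage");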
pub async fn batch<SB, KB, VB, TB, DB, I>(
    &mut self,
    updates: I,
    lower: Antichain<T>,
    upper: Antichain<T>,
) -> Result<Batch<K, V, T, D>, InvalidUsage<T>>
Uploads the given updates as one Batch to the blob store and returns a handle to the batch.
pub async fn wait_for_upper_past(&mut self, frontier: &Antichain<T>)
Blocks until the given frontier is less than the upper of the shard.
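For example, to wait until all data at time 3 is complete (illustrative types as above):

use timely::progress::Antichain;

// Resolves once the shard's upper is strictly past {3}.
write.wait_for_upper_past(&Antichain::from_elem(3u64)).await;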
pub async fn expire(self)
Politely expires this writer, releasing any associated state.

There is a best-effort impl in Drop to expire a writer that wasn’t explicitly expired with this method. When possible, explicit expiry is still preferred because the Drop one is best-effort and is dependent on a tokio Handle being available in the TLC at the time of drop (which is a bit subtle). Also, explicit expiry allows for control over when it happens.
fn expire_fn( machine: Machine<K, V, T, D>, gc: GarbageCollector<K, V, T, D>, writer_id: WriterId, ) -> ExpireFn
Trait Implementations
Auto Trait Implementations
impl<K, V, T, D> Freeze for WriteHandle<K, V, T, D> where T: Freeze
impl<K, V, T, D> !RefUnwindSafe for WriteHandle<K, V, T, D>
impl<K, V, T, D> Send for WriteHandle<K, V, T, D>
impl<K, V, T, D> Sync for WriteHandle<K, V, T, D>
impl<K, V, T, D> Unpin for WriteHandle<K, V, T, D> where T: Unpin
impl<K, V, T, D> !UnwindSafe for WriteHandle<K, V, T, D>
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
    fn borrow_mut(&mut self) -> &mut T
impl<T> FutureExt for T
    fn with_context(self, otel_cx: Context) -> WithContext<Self>
    fn with_current_context(self) -> WithContext<Self>
impl<T> Instrument for T
    fn instrument(self, span: Span) -> Instrumented<Self>
    fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
    fn into_request(self) -> Request<T>
        Wraps the input message T in a tonic::Request.
impl<T> Pointable for T
impl<P, R> ProtoType<R> for P where R: RustType<P>
    fn into_rust(self) -> Result<R, TryFromProtoError>
        See RustType::from_proto.
    fn from_rust(rust: &R) -> P
        See RustType::into_proto.
impl<'a, S, T> Semigroup<&'a S> for T where T: Semigroup<S>
    fn plus_equals(&mut self, rhs: &&'a S)
        The method of std::ops::AddAssign, for types that do not implement AddAssign.