Struct mz_txn_wal::txns::DataWriteApply
pub(crate) struct DataWriteApply<K: Codec, V: Codec, T, D> {
client: Arc<PersistClient>,
apply_ensure_schema_match: ConfigValHandle<bool>,
pub(crate) wrapped: WriteHandle<K, V, T, D>,
}
A newtype wrapper around WriteHandle indicating that it can alter the schema it’s using to match the one in the batches being appended.
When a batch is committed to txn-wal, it contains metadata about which schemas were used to encode the data in it. Txn-wal then uses this info to make sure that, in TxnsHandle::apply_le, the compare_and_append call happens on a handle with the same schema. This is accomplished by querying the persist schema registry.
Fields§
client: Arc<PersistClient>
apply_ensure_schema_match: ConfigValHandle<bool>
wrapped: WriteHandle<K, V, T, D>
Implementations§
Methods from Deref<Target = WriteHandle<K, V, T, D>>§
pub fn upper(&self) -> &Antichain<T>
A cached version of the shard-global upper frontier.
This is the most recent upper discovered by this handle. It is potentially more stale than Self::shared_upper but is lock-free and allocation-free. This will always be less than or equal to the shard-global upper.
pub fn shared_upper(&self) -> Antichain<T>
A less-stale cached version of the shard-global upper frontier.
This is the most recently known upper for this shard process-wide, but unlike Self::upper it requires a mutex and a clone. This will always be less than or equal to the shard-global upper.
pub async fn fetch_recent_upper(&mut self) -> &Antichain<T>
Fetches and returns a recent shard-global upper. Importantly, this operation is linearized with write operations.
This requires fetching the latest state from consensus and is therefore a potentially expensive operation.
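For orientation, a minimal sketch contrasting the cached and the linearized upper. The helper function and the concrete String/u64/i64 Codec types are illustrative assumptions, not part of this API:

```rust
use mz_persist_client::write::WriteHandle;
use timely::progress::Antichain;

// Hypothetical helper; concrete K/V/T/D types are illustrative only.
async fn inspect_uppers(write: &mut WriteHandle<String, String, u64, i64>) {
    // Cheap, lock-free, allocation-free: the last upper this handle observed.
    let cached: &Antichain<u64> = write.upper();
    println!("cached upper: {:?}", cached);

    // Linearized with writes, but requires a round-trip to consensus.
    let recent = write.fetch_recent_upper().await.clone();
    println!("recent upper: {:?}", recent);
}
```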
pub async fn append<SB, KB, VB, TB, DB, I>(
    &mut self,
    updates: I,
    lower: Antichain<T>,
    upper: Antichain<T>,
) -> Result<Result<(), UpperMismatch<T>>, InvalidUsage<T>>
Applies updates to this shard and downgrades this handle’s upper to upper.
The innermost Result is Ok if the updates were successfully written. If not, an Upper err containing the current writer upper is returned. If that happens, we also update our local upper to match the current upper. This is useful in cases where a timeout happens in between a successful write and returning that to the client.
In contrast to Self::compare_and_append, multiple WriteHandles may be used concurrently to write to the same shard, but in this case, the data being written must be identical (in the sense of “definite”-ness). It’s intended for replicated use by source ingestion, sinks, etc.
All times in updates must be greater than or equal to lower and not greater than or equal to upper. An upper of the empty antichain “finishes” this shard, promising that no more data is ever incoming.
updates may be empty, which allows for downgrading upper to communicate progress. It is possible to call this with upper equal to self.upper() and an empty updates (making the call a no-op).
This uses a bounded amount of memory, even when updates is very large. Individual records, however, should be small enough that we can reasonably chunk them up: O(KB) is definitely fine, O(MB) come talk to us.
The clunky multi-level Result is to enable more obvious error handling in the caller. See http://sled.rs/errors.html for details.
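A minimal sketch of a (possibly replicated) append, assuming illustrative String/u64/i64 Codec types; each update is shaped as ((key, value), time, diff):

```rust
use mz_persist_client::write::WriteHandle;
use timely::progress::Antichain;

// Hypothetical helper with illustrative concrete types.
async fn append_one(write: &mut WriteHandle<String, String, u64, i64>) {
    // Each update is ((key, value), time, diff).
    let updates = vec![(("k1".to_owned(), "v1".to_owned()), 3u64, 1i64)];
    // All update times must be >= lower and < upper.
    let lower = Antichain::from_elem(3u64);
    let upper = Antichain::from_elem(4u64);

    match write.append(updates, lower, upper).await {
        // Written; this handle's upper is now `upper`.
        Ok(Ok(())) => {}
        // Another (replicated) writer already advanced the shard; our local
        // upper has been refreshed to the current writer upper.
        Ok(Err(mismatch)) => println!("upper mismatch: {:?}", mismatch),
        // Usage error, e.g. an update time outside of [lower, upper).
        Err(invalid) => println!("invalid usage: {:?}", invalid),
    }
}
```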
pub async fn compare_and_append<SB, KB, VB, TB, DB, I>(
    &mut self,
    updates: I,
    expected_upper: Antichain<T>,
    new_upper: Antichain<T>,
) -> Result<Result<(), UpperMismatch<T>>, InvalidUsage<T>>
Applies updates to this shard and downgrades this handle’s upper to new_upper iff the current global upper of this shard is expected_upper.
The innermost Result is Ok if the updates were successfully written. If not, an Upper err containing the current global upper is returned.
In contrast to Self::append, this linearizes mutations from all writers. It’s intended for use as an atomic primitive for timestamp bindings, SQL tables, etc.
All times in updates must be greater than or equal to expected_upper and not greater than or equal to new_upper. A new_upper of the empty antichain “finishes” this shard, promising that no more data is ever incoming.
updates may be empty, which allows for downgrading upper to communicate progress. It is possible to heartbeat a writer lease by calling this with new_upper equal to self.upper() and an empty updates (making the call a no-op).
This uses a bounded amount of memory, even when updates is very large. Individual records, however, should be small enough that we can reasonably chunk them up: O(KB) is definitely fine, O(MB) come talk to us.
The clunky multi-level Result is to enable more obvious error handling in the caller. See http://sled.rs/errors.html for details.
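As a sketch of the intended compare-and-append pattern (concrete types are illustrative; the retry policy is an assumption, not prescribed by this API):

```rust
use mz_persist_client::write::WriteHandle;
use timely::progress::Antichain;

// Atomically insert a single update at the shard's current frontier,
// retrying if another writer wins the race.
async fn insert_atomically(
    write: &mut WriteHandle<String, String, u64, i64>,
    key: String,
    value: String,
) {
    loop {
        // The write only succeeds if the global upper is still exactly this.
        let expected = write.fetch_recent_upper().await.clone();
        let Some(ts) = expected.as_option().copied() else {
            // Empty antichain: the shard is finished; nothing more can be written.
            return;
        };
        let new_upper = Antichain::from_elem(ts + 1);
        let updates = vec![((key.clone(), value.clone()), ts, 1i64)];

        match write.compare_and_append(updates, expected, new_upper).await {
            Ok(Ok(())) => return,
            // Lost the race: another writer advanced the upper. Loop and
            // retry against the new frontier.
            Ok(Err(_mismatch)) => continue,
            Err(invalid) => panic!("invalid usage: {:?}", invalid),
        }
    }
}
```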
pub async fn append_batch(
    &mut self,
    batch: Batch<K, V, T, D>,
    lower: Antichain<T>,
    upper: Antichain<T>,
) -> Result<Result<(), UpperMismatch<T>>, InvalidUsage<T>>
Appends the batch of updates to the shard and downgrades this handle’s upper to upper.
The innermost Result is Ok if the updates were successfully written. If not, an Upper err containing the current writer upper is returned. If that happens, we also update our local upper to match the current upper. This is useful in cases where a timeout happens in between a successful write and returning that to the client.
In contrast to Self::compare_and_append_batch, multiple WriteHandles may be used concurrently to write to the same shard, but in this case, the data being written must be identical (in the sense of “definite”-ness). It’s intended for replicated use by source ingestion, sinks, etc.
An upper of the empty antichain “finishes” this shard, promising that no more data is ever incoming.
The batch may be empty, which allows for downgrading upper to communicate progress. It is possible to heartbeat a writer lease by calling this with upper equal to self.upper() and an empty batch (making the call a no-op).
The clunky multi-level Result is to enable more obvious error handling in the caller. See http://sled.rs/errors.html for details.
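A sketch of appending a batch that was built elsewhere and transmitted as a ProtoBatch. The import paths for Batch/ProtoBatch and the concrete types are assumptions:

```rust
use mz_persist_client::batch::{Batch, ProtoBatch};
use mz_persist_client::write::WriteHandle;
use timely::progress::Antichain;

// Hypothetical: a ProtoBatch was received (e.g. over the network) from the
// process that uploaded the batch data to blob storage.
async fn append_received(
    write: &mut WriteHandle<String, String, u64, i64>,
    proto: ProtoBatch,
    lower: Antichain<u64>,
    upper: Antichain<u64>,
) {
    // Rehydrate a Batch handle pointing at the already-uploaded data.
    let batch: Batch<String, String, u64, i64> =
        write.batch_from_transmittable_batch(proto);

    match write.append_batch(batch, lower, upper).await {
        Ok(Ok(())) => {}
        // A concurrent (replicated) writer already appended past `lower`.
        Ok(Err(mismatch)) => println!("upper mismatch: {:?}", mismatch),
        Err(invalid) => println!("invalid usage: {:?}", invalid),
    }
}
```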
pub async fn compare_and_append_batch(
    &mut self,
    batches: &mut [&mut Batch<K, V, T, D>],
    expected_upper: Antichain<T>,
    new_upper: Antichain<T>,
) -> Result<Result<(), UpperMismatch<T>>, InvalidUsage<T>>
Appends the batch of updates to the shard and downgrades this handle’s upper to new_upper iff the current global upper of this shard is expected_upper.
The innermost Result is Ok if the batch was successfully written. If not, an Upper err containing the current global upper is returned.
In contrast to Self::append_batch, this linearizes mutations from all writers. It’s intended for use as an atomic primitive for timestamp bindings, SQL tables, etc.
A new_upper of the empty antichain “finishes” this shard, promising that no more data is ever incoming.
The batch may be empty, which allows for downgrading upper to communicate progress. It is possible to heartbeat a writer lease by calling this with new_upper equal to self.upper() and empty batches (making the call a no-op).
IMPORTANT: In case of an erroneous result the caller is responsible for the lifecycle of the batch. It can be deleted or it can be used to retry with adjusted frontiers.
The clunky multi-level Result is to enable more obvious error handling in the caller. See http://sled.rs/errors.html for details.
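A sketch of the caller-owned batch lifecycle around compare_and_append_batch; the import paths, concrete types, and the Batch::delete cleanup call are assumptions:

```rust
use mz_persist_client::batch::Batch;
use mz_persist_client::write::WriteHandle;
use timely::progress::Antichain;

async fn apply_batch(
    write: &mut WriteHandle<String, String, u64, i64>,
    mut batch: Batch<String, String, u64, i64>,
    expected_upper: Antichain<u64>,
    new_upper: Antichain<u64>,
) {
    let result = write
        .compare_and_append_batch(&mut [&mut batch], expected_upper, new_upper)
        .await;
    match result {
        // Success: the batch data is now part of the shard.
        Ok(Ok(())) => {}
        Ok(Err(mismatch)) => {
            // The global upper moved first. The batch is still ours: either
            // retry with adjusted frontiers or clean up the blob data.
            println!("upper mismatch: {:?}", mismatch);
            batch.delete().await; // assumed cleanup API
        }
        Err(invalid) => {
            println!("invalid usage: {:?}", invalid);
            batch.delete().await; // assumed cleanup API
        }
    }
}
```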
pub fn batch_from_transmittable_batch(
    &self,
    batch: ProtoBatch,
) -> Batch<K, V, T, D>
Turns the given ProtoBatch back into a Batch which can be used to append it to this shard.
pub fn builder(&self, lower: Antichain<T>) -> BatchBuilder<K, V, T, D>
Returns a BatchBuilder that can be used to write a batch of updates to blob storage which can then be appended to this shard using Self::compare_and_append_batch or Self::append_batch.
It is correct to create an empty batch, which allows for downgrading upper to communicate progress. (See Self::compare_and_append_batch or Self::append_batch.)
The builder uses a bounded amount of memory, even when the number of updates is very large. Individual records, however, should be small enough that we can reasonably chunk them up: O(KB) is definitely fine, O(MB) come talk to us.
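A sketch of streaming updates into a batch via the builder. The BatchBuilder add/finish calls and their exact signatures are assumptions not documented on this page; concrete types are illustrative:

```rust
use mz_persist_client::write::WriteHandle;
use timely::progress::Antichain;

async fn build_incrementally(write: &mut WriteHandle<String, String, u64, i64>) {
    let lower = Antichain::from_elem(0u64);
    let upper = Antichain::from_elem(10u64);

    let mut builder = write.builder(lower);
    for i in 0..5u64 {
        // Assumed: BatchBuilder::add(&mut self, key, val, ts, diff).
        builder
            .add(&format!("k{}", i), &"v".to_owned(), &i, &1i64)
            .await
            .expect("valid usage");
    }
    // Assumed: finishing the builder seals the batch at `upper`.
    let _batch = builder.finish(upper).await.expect("valid usage");
    // The resulting Batch can then be passed to compare_and_append_batch or
    // append_batch on this same shard.
}
```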
pub async fn batch<SB, KB, VB, TB, DB, I>(
    &mut self,
    updates: I,
    lower: Antichain<T>,
    upper: Antichain<T>,
) -> Result<Batch<K, V, T, D>, InvalidUsage<T>>
Uploads the given updates as one Batch to the blob store and returns a handle to the batch.
pub async fn wait_for_upper_past(&mut self, frontier: &Antichain<T>)
Blocks until the given frontier is less than the upper of the shard.
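A small sketch of waiting for a write at time ts to become visible in the shard’s upper (the helper and concrete types are illustrative):

```rust
use mz_persist_client::write::WriteHandle;
use timely::progress::Antichain;

// Block until the shard's upper is strictly past `ts`.
async fn wait_until_past(write: &mut WriteHandle<String, String, u64, i64>, ts: u64) {
    write.wait_for_upper_past(&Antichain::from_elem(ts)).await;
    // From here on, `ts` is less than every element of the shard's upper.
}
```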
Trait Implementations§
impl<K: Debug + Codec, V: Debug + Codec, T: Debug, D: Debug> Debug for DataWriteApply<K, V, T, D>
Auto Trait Implementations§
impl<K, V, T, D> Freeze for DataWriteApply<K, V, T, D>
where
    T: Freeze,
impl<K, V, T, D> !RefUnwindSafe for DataWriteApply<K, V, T, D>
impl<K, V, T, D> Send for DataWriteApply<K, V, T, D>
impl<K, V, T, D> Sync for DataWriteApply<K, V, T, D>
impl<K, V, T, D> Unpin for DataWriteApply<K, V, T, D>
where
    T: Unpin,
impl<K, V, T, D> !UnwindSafe for DataWriteApply<K, V, T, D>
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> FutureExt for T
fn with_context(self, otel_cx: Context) -> WithContext<Self>
fn with_current_context(self) -> WithContext<Self>
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wraps T in a tonic::Request.
impl<T> Pointable for T
impl<P, R> ProtoType<R> for P
where
    R: RustType<P>,
fn into_rust(self) -> Result<R, TryFromProtoError>
See RustType::from_proto.
fn from_rust(rust: &R) -> P
See RustType::into_proto.
impl<'a, S, T> Semigroup<&'a S> for T
where
    T: Semigroup<S>,
fn plus_equals(&mut self, rhs: &&'a S)
The method of std::ops::AddAssign, for types that do not implement AddAssign.