pub struct WriteHandle<K, V, T, D>
where
    T: Timestamp + Lattice + Codec64,
    K: Debug + Codec,
    V: Debug + Codec,
    D: Semigroup + Codec64 + Send + Sync,
{
    pub(crate) cfg: PersistConfig,
    pub(crate) metrics: Arc<Metrics>,
    pub(crate) machine: Machine<K, V, T, D>,
    pub(crate) gc: GarbageCollector<K, V, T, D>,
    pub(crate) compact: Option<Compactor<K, V, T, D>>,
    pub(crate) blob: Arc<dyn Blob + Send + Sync>,
    pub(crate) cpu_heavy_runtime: Arc<CpuHeavyRuntime>,
    pub(crate) writer_id: WriterId,
    pub(crate) schemas: Schemas<K, V>,
    pub(crate) upper: Antichain<T>,
    pub(crate) last_heartbeat: EpochMillis,
    explicitly_expired: bool,
    pub(crate) heartbeat_task: Option<JoinHandle<()>>,
}

A “capability” granting the ability to apply updates to some shard at times greater than or equal to self.upper().

All async methods on WriteHandle retry for as long as they are able, but the returned std::future::Futures implement “cancel on drop” semantics. This means that callers can add a timeout using tokio::time::timeout or tokio::time::timeout_at.

tokio::time::timeout(timeout, write.fetch_recent_upper()).await
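For example, a hedged sketch of bounding one of these calls with a timeout (it assumes `write` is an already-open WriteHandle on a shard with u64 timestamps):

use std::time::Duration;

// Sketch: on timeout the inner future is dropped, which cancels the in-flight
// operation per the "cancel on drop" semantics described above.
match tokio::time::timeout(Duration::from_secs(5), write.fetch_recent_upper()).await {
    Ok(upper) => println!("recent upper: {:?}", upper),
    Err(_elapsed) => println!("fetch_recent_upper timed out"),
}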

Fields

cfg: PersistConfig
metrics: Arc<Metrics>
machine: Machine<K, V, T, D>
gc: GarbageCollector<K, V, T, D>
compact: Option<Compactor<K, V, T, D>>
blob: Arc<dyn Blob + Send + Sync>
cpu_heavy_runtime: Arc<CpuHeavyRuntime>
writer_id: WriterId
schemas: Schemas<K, V>
upper: Antichain<T>
last_heartbeat: EpochMillis
explicitly_expired: bool
heartbeat_task: Option<JoinHandle<()>>

Implementations

impl<K, V, T, D> WriteHandle<K, V, T, D> where K: Debug + Codec, V: Debug + Codec, T: Timestamp + Lattice + Codec64, D: Semigroup + Codec64 + Send + Sync

pub(crate) async fn new(cfg: PersistConfig, metrics: Arc<Metrics>, machine: Machine<K, V, T, D>, gc: GarbageCollector<K, V, T, D>, compact: Option<Compactor<K, V, T, D>>, blob: Arc<dyn Blob + Send + Sync>, cpu_heavy_runtime: Arc<CpuHeavyRuntime>, writer_id: WriterId, schemas: Schemas<K, V>, upper: Antichain<T>, last_heartbeat: EpochMillis) -> Self

pub fn upper(&self) -> &Antichain<T>

A cached version of the shard-global upper frontier.

This will always be less than or equal to the shard-global upper.

pub async fn fetch_recent_upper(&mut self) -> &Antichain<T>

Fetches and returns a recent shard-global upper. Importantly, this operation is not linearized with other write operations.

This requires fetching the latest state from consensus and is therefore a potentially expensive operation.
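A short sketch of the difference between the cached and fetched frontiers (assuming `write` is an open WriteHandle<String, String, u64, i64>):

// Sketch: the cached upper is free to read but may lag; fetch_recent_upper
// round-trips to consensus for a more recent value.
let cached = write.upper().clone();
let recent = write.fetch_recent_upper().await.clone();
// `recent` is at least as advanced as `cached`, but because the call is not
// linearized with writes, the true shard upper may already be further along.
assert!(timely::PartialOrder::less_equal(&cached, &recent));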

pub async fn append<SB, KB, VB, TB, DB, I>(&mut self, updates: I, lower: Antichain<T>, upper: Antichain<T>) -> Result<Result<(), UpperMismatch<T>>, InvalidUsage<T>> where SB: Borrow<((KB, VB), TB, DB)>, KB: Borrow<K>, VB: Borrow<V>, TB: Borrow<T>, DB: Borrow<D>, I: IntoIterator<Item = SB>, D: Send + Sync

Applies updates to this shard and downgrades this handle’s upper to upper.

The innermost Result is Ok if the updates were successfully written. If not, an UpperMismatch error containing the current writer upper is returned. If that happens, we also update our local upper to match the current upper. This is useful in cases where a timeout happens in between a successful write and the response reaching the client.

In contrast to Self::compare_and_append, multiple WriteHandles may be used concurrently to write to the same shard, but in this case, the data being written must be identical (in the sense of “definite”-ness). It’s intended for replicated use by source ingestion, sinks, etc.

All times in updates must be greater than or equal to lower and not greater than or equal to upper. An upper of the empty antichain “finishes” this shard, promising that no more data is ever incoming.

updates may be empty, which allows for downgrading upper to communicate progress. It is possible to heartbeat a writer lease by calling this with upper equal to self.upper() and empty updates (making the call a no-op).

This uses a bounded amount of memory, even when updates is very large. Individual records, however, should be small enough that we can reasonably chunk them up: O(KB) is definitely fine, O(MB) come talk to us.

The clunky multi-level Result is to enable more obvious error handling in the caller. See http://sled.rs/errors.html for details.
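A hedged sketch of a single append, assuming `write` is a WriteHandle<String, String, u64, i64> whose upper is currently {3}:

use timely::progress::Antichain;

// Sketch: write one update at time 3 and advance the shard upper from 3 to 5.
let updates = vec![(("k".to_owned(), "v".to_owned()), 3u64, 1i64)];
let lower = Antichain::from_elem(3u64);
let upper = Antichain::from_elem(5u64);
match write.append(updates.iter(), lower, upper).await {
    Ok(Ok(())) => { /* applied; self.upper() is now {5} */ }
    Ok(Err(_mismatch)) => { /* shard upper was not {3}; our cached upper was refreshed */ }
    Err(err) => panic!("invalid usage: {:?}", err),
}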

pub async fn compare_and_append<SB, KB, VB, TB, DB, I>(&mut self, updates: I, expected_upper: Antichain<T>, new_upper: Antichain<T>) -> Result<Result<(), UpperMismatch<T>>, InvalidUsage<T>> where SB: Borrow<((KB, VB), TB, DB)>, KB: Borrow<K>, VB: Borrow<V>, TB: Borrow<T>, DB: Borrow<D>, I: IntoIterator<Item = SB>, D: Send + Sync

Applies updates to this shard and downgrades this handle’s upper to new_upper iff the current global upper of this shard is expected_upper.

The innermost Result is Ok if the updates were successfully written. If not, an UpperMismatch error containing the current global upper is returned.

In contrast to Self::append, this linearizes mutations from all writers. It’s intended for use as an atomic primitive for timestamp bindings, SQL tables, etc.

All times in updates must be greater than or equal to expected_upper and not greater than or equal to new_upper. A new_upper of the empty antichain “finishes” this shard, promising that no more data is ever incoming.

updates may be empty, which allows for downgrading upper to communicate progress. It is possible to heartbeat a writer lease by calling this with new_upper equal to self.upper() and empty updates (making the call a no-op).

This uses a bounded amount of memory, even when updates is very large. Individual records, however, should be small enough that we can reasonably chunk them up: O(KB) is definitely fine, O(MB) come talk to us.

The clunky multi-level Result is to enable more obvious error handling in the caller. See http://sled.rs/errors.html for details.
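A hedged sketch of the usual retry loop around compare_and_append, assuming `write` is a WriteHandle<String, String, u64, i64>:

use timely::progress::Antichain;

// Sketch: retry until our expected_upper matches the shard's actual upper.
loop {
    let expected_upper = write.fetch_recent_upper().await.clone();
    // The empty antichain means the shard is finished and can never accept this write.
    let write_ts = *expected_upper.elements().first().expect("shard is not finished");
    let new_upper = Antichain::from_elem(write_ts + 1);
    let updates = vec![(("k".to_owned(), "v".to_owned()), write_ts, 1i64)];
    match write
        .compare_and_append(updates.iter(), expected_upper, new_upper)
        .await
        .expect("valid usage")
    {
        Ok(()) => break,     // the write was linearized at write_ts
        Err(_mismatch) => {} // another writer advanced the upper first; retry
    }
}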

pub async fn append_batch(&mut self, batch: Batch<K, V, T, D>, lower: Antichain<T>, upper: Antichain<T>) -> Result<Result<(), UpperMismatch<T>>, InvalidUsage<T>> where D: Send + Sync

Appends the batch of updates to the shard and downgrades this handle’s upper to upper.

The innermost Result is Ok if the updates were successfully written. If not, an UpperMismatch error containing the current writer upper is returned. If that happens, we also update our local upper to match the current upper. This is useful in cases where a timeout happens in between a successful write and the response reaching the client.

In contrast to Self::compare_and_append_batch, multiple WriteHandles may be used concurrently to write to the same shard, but in this case, the data being written must be identical (in the sense of “definite”-ness). It’s intended for replicated use by source ingestion, sinks, etc.

An upper of the empty antichain “finishes” this shard, promising that no more data is ever incoming.

The batch may be empty, which allows for downgrading upper to communicate progress. It is possible to heartbeat a writer lease by calling this with upper equal to self.upper() and an empty batch (making the call a no-op).

The clunky multi-level Result is to enable more obvious error handling in the caller. See http://sled.rs/errors.html for details.

pub async fn compare_and_append_batch(&mut self, batches: &mut [&mut Batch<K, V, T, D>], expected_upper: Antichain<T>, new_upper: Antichain<T>) -> Result<Result<(), UpperMismatch<T>>, InvalidUsage<T>> where D: Send + Sync

Appends the batch of updates to the shard and downgrades this handle’s upper to new_upper iff the current global upper of this shard is expected_upper.

The innermost Result is Ok if the batch was successfully written. If not, an UpperMismatch error containing the current global upper is returned.

In contrast to Self::append_batch, this linearizes mutations from all writers. It’s intended for use as an atomic primitive for timestamp bindings, SQL tables, etc.

A new_upper of the empty antichain “finishes” this shard, promising that no more data is ever incoming.

The batch may be empty, which allows for downgrading upper to communicate progress. It is possible to heartbeat a writer lease by calling this with new_upper equal to self.upper() and an empty batch (making the call a no-op).

IMPORTANT: If the result is an error, the caller is responsible for the lifecycle of the batch: it can be deleted, or it can be used to retry with adjusted frontiers.

The clunky multi-level Result is to enable more obvious error handling in the caller. See http://sled.rs/errors.html for details.

pub fn batch_from_hollow_batch(&self, hollow: WriterEnrichedHollowBatch<T>) -> Batch<K, V, T, D>

Turns the given WriterEnrichedHollowBatch back into a Batch which can be used to append it to this shard.

pub fn builder(&mut self, lower: Antichain<T>) -> BatchBuilder<K, V, T, D>

Returns a BatchBuilder that can be used to write a batch of updates to blob storage which can then be appended to this shard using Self::compare_and_append_batch or Self::append_batch.

It is correct to create an empty batch, which allows for downgrading upper to communicate progress (see Self::compare_and_append_batch or Self::append_batch).

The builder uses a bounded amount of memory, even when the number of updates is very large. Individual records, however, should be small enough that we can reasonably chunk them up: O(KB) is definitely fine, O(MB) come talk to us.
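A hedged sketch tying the batch APIs together, assuming `write` is a WriteHandle<String, String, u64, i64> and that BatchBuilder exposes add/finish as shown:

use timely::progress::Antichain;

// Sketch: stage a batch in blob storage first, then commit it atomically.
let expected_upper = write.upper().clone();
let new_upper = Antichain::from_elem(5u64);

let mut builder = write.builder(expected_upper.clone());
builder
    .add(&"k".to_owned(), &"v".to_owned(), &3u64, &1i64)
    .await
    .expect("valid usage");
let mut batch = builder.finish(new_upper.clone()).await.expect("valid usage");

// On an Err result we would still own `batch` and could delete it or retry
// with adjusted frontiers, as described for compare_and_append_batch above.
write
    .compare_and_append_batch(&mut [&mut batch], expected_upper, new_upper)
    .await
    .expect("valid usage")
    .expect("no concurrent writer");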

pub async fn batch<SB, KB, VB, TB, DB, I>(&mut self, updates: I, lower: Antichain<T>, upper: Antichain<T>) -> Result<Batch<K, V, T, D>, InvalidUsage<T>> where SB: Borrow<((KB, VB), TB, DB)>, KB: Borrow<K>, VB: Borrow<V>, TB: Borrow<T>, DB: Borrow<D>, I: IntoIterator<Item = SB>

Uploads the given updates as one Batch to the blob store and returns a handle to the batch.

pub async fn maybe_heartbeat_writer(&mut self)

Heartbeats the writer lease if necessary.

This is an internally rate-limited helper, designed to allow users to call it as frequently as they like. Call this on some interval that is “frequent” compared to PersistConfig::writer_lease_duration.
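A sketch of a caller-driven heartbeat loop, assuming `write` is an open WriteHandle and the chosen interval is well under the configured lease duration:

use std::time::Duration;

// Sketch: the method rate-limits internally, so a coarse ticker is enough.
let mut ticker = tokio::time::interval(Duration::from_secs(10));
loop {
    ticker.tick().await;
    write.maybe_heartbeat_writer().await;
}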

pub async fn expire(self)

Politely expires this writer, releasing its lease.

There is a best-effort impl in Drop to expire a writer that wasn’t explicitly expired with this method. When possible, explicit expiry is still preferred because the Drop impl is best-effort and depends on a tokio Handle being available in the TLC at the time of drop (which is a bit subtle). Also, explicit expiry allows for control over when it happens.

Trait Implementations

impl<K, V, T, D> Debug for WriteHandle<K, V, T, D> where T: Timestamp + Lattice + Codec64 + Debug, K: Debug + Codec, V: Debug + Codec, D: Semigroup + Codec64 + Send + Sync + Debug

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl<K, V, T, D> Drop for WriteHandle<K, V, T, D> where T: Timestamp + Lattice + Codec64, K: Debug + Codec, V: Debug + Codec, D: Semigroup + Codec64 + Send + Sync

fn drop(&mut self)

Executes the destructor for this type.

Auto Trait Implementations

impl<K, V, T, D> !RefUnwindSafe for WriteHandle<K, V, T, D>

impl<K, V, T, D> Send for WriteHandle<K, V, T, D>

impl<K, V, T, D> Sync for WriteHandle<K, V, T, D>

impl<K, V, T, D> Unpin for WriteHandle<K, V, T, D> where T: Unpin

impl<K, V, T, D> !UnwindSafe for WriteHandle<K, V, T, D>

Blanket Implementations

impl<T> Any for T where T: 'static + ?Sized

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T where T: ?Sized

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> FutureExt for T

fn with_context(self, otel_cx: Context) -> WithContext<Self>

Attaches the provided Context to this type, returning a WithContext wrapper.

fn with_current_context(self) -> WithContext<Self>

Attaches the current Context to this type, returning a WithContext wrapper.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T where U: From<T>

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoRequest<T> for T

fn into_request(self) -> Request<T>

Wraps the input message T in a tonic::Request.

impl<P, R> ProtoType<R> for P where R: RustType<P>

impl<T> Same<T> for T

type Output = T

Should always be Self.

impl<T, U> TryFrom<U> for T where U: Into<T>

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T where U: TryFrom<T>

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<V, T> VZip<V> for T where V: MultiLane<T>

fn vzip(self) -> V

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.